Test Report: Docker_Linux_containerd 20062

964562641276d457941dbb6d7cf4aa7e43312d02:2024-12-10:37415

Failed tests (1/330)

Order  Failed test                                              Duration (s)
368    TestStartStop/group/old-k8s-version/serial/SecondStart   379.68
TestStartStop/group/old-k8s-version/serial/SecondStart (379.68s)
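To reproduce locally, the invocation below is copied verbatim from the first Run line of the log; it assumes a minikube binary built from the commit in the header at out/minikube-linux-amd64 and a local Docker daemon (the profile name old-k8s-version-280963 was generated for this CI run, so any unused name works):

	out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0

The full serial group can also be re-run through the integration suite (a sketch; the harness may require additional flags, such as the path to the built binary):

	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial' -timeout 90m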

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1210 00:25:07.616719  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:16.970227  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:36.882648  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:46.903924  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:46.910379  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:46.921820  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:46.943303  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:46.984862  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:47.066372  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:47.227749  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:47.549777  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:48.191892  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:48.578802  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:49.473332  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:52.035437  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.255456  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.261926  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.273411  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.294940  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.336461  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.417993  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.579567  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:54.900878  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:55.542547  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:56.824232  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:57.157212  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:57.932185  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:59.385514  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:04.507528  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:07.399265  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:14.749324  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:27.880721  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:35.230757  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.193095  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.199591  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.211109  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.232601  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.274583  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.356180  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.517744  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:53.839914  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:54.482121  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:55.764431  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:58.326514  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:03.448684  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:08.843130  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:10.500631  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.153590  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.160057  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.171507  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.192949  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.234529  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.316095  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.477720  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.690610  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:13.800083  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:14.441748  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:15.723857  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:16.192808  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:18.285966  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:19.854014  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:23.407811  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:33.649727  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:33.812457  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:34.172160  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:42.891419  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:42.897892  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:42.909347  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:42.930778  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:42.972289  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:43.053957  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:43.215929  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:43.538153  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:44.180132  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:45.462534  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:48.024138  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:53.146364  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:27:54.132163  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:28:03.388152  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:28:13.584295  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:28:15.134464  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:28:23.869772  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:28:30.764697  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/custom-flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:28:35.093739  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/flannel-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.576229017s)

-- stdout --
	* [old-k8s-version-280963] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-280963" primary control-plane node in "old-k8s-version-280963" cluster
	* Pulling base image v0.0.45-1730888964-19917 ...
	* Restarting existing docker container for "old-k8s-version-280963" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-280963 addons enable metrics-server
	
	* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I1210 00:25:04.955946  869958 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:25:04.956096  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:25:04.956109  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:25:04.956116  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:25:04.956525  869958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1210 00:25:04.957354  869958 out.go:352] Setting JSON to false
	I1210 00:25:04.959263  869958 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":11249,"bootTime":1733779056,"procs":636,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:25:04.959396  869958 start.go:139] virtualization: kvm guest
	I1210 00:25:04.961100  869958 out.go:177] * [old-k8s-version-280963] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:25:04.962822  869958 notify.go:220] Checking for updates...
	I1210 00:25:04.962896  869958 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:25:04.964460  869958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:25:04.965964  869958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:25:04.967799  869958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1210 00:25:04.969313  869958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:25:04.970642  869958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:25:04.972583  869958 config.go:182] Loaded profile config "old-k8s-version-280963": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1210 00:25:04.974529  869958 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1210 00:25:04.975710  869958 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:25:05.001125  869958 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1210 00:25:05.001300  869958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:25:05.062760  869958 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:75 SystemTime:2024-12-10 00:25:05.050309938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:25:05.062954  869958 docker.go:318] overlay module found
	I1210 00:25:05.064816  869958 out.go:177] * Using the docker driver based on existing profile
	I1210 00:25:05.066283  869958 start.go:297] selected driver: docker
	I1210 00:25:05.066302  869958 start.go:901] validating driver "docker" against &{Name:old-k8s-version-280963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-280963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:25:05.066393  869958 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:25:05.067287  869958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:25:05.114952  869958 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:75 SystemTime:2024-12-10 00:25:05.105417382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:25:05.115361  869958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:25:05.115394  869958 cni.go:84] Creating CNI manager for ""
	I1210 00:25:05.115445  869958 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 00:25:05.115491  869958 start.go:340] cluster config:
	{Name:old-k8s-version-280963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-280963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:25:05.117575  869958 out.go:177] * Starting "old-k8s-version-280963" primary control-plane node in "old-k8s-version-280963" cluster
	I1210 00:25:05.118962  869958 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1210 00:25:05.120396  869958 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1210 00:25:05.121598  869958 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1210 00:25:05.121642  869958 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1210 00:25:05.121659  869958 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I1210 00:25:05.121677  869958 cache.go:56] Caching tarball of preloaded images
	I1210 00:25:05.121801  869958 preload.go:172] Found /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 00:25:05.121817  869958 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1210 00:25:05.121960  869958 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/config.json ...
	I1210 00:25:05.143674  869958 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1210 00:25:05.143699  869958 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1210 00:25:05.143717  869958 cache.go:194] Successfully downloaded all kic artifacts
	I1210 00:25:05.143752  869958 start.go:360] acquireMachinesLock for old-k8s-version-280963: {Name:mk866f9896e80cc71597f575ad6ef1d7edb45190 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:25:05.143833  869958 start.go:364] duration metric: took 57.999µs to acquireMachinesLock for "old-k8s-version-280963"
	I1210 00:25:05.143858  869958 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:25:05.143866  869958 fix.go:54] fixHost starting: 
	I1210 00:25:05.144079  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:05.163680  869958 fix.go:112] recreateIfNeeded on old-k8s-version-280963: state=Stopped err=<nil>
	W1210 00:25:05.163723  869958 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:25:05.165617  869958 out.go:177] * Restarting existing docker container for "old-k8s-version-280963" ...
	I1210 00:25:05.167193  869958 cli_runner.go:164] Run: docker start old-k8s-version-280963
	I1210 00:25:05.475855  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:05.496427  869958 kic.go:430] container "old-k8s-version-280963" state is running.
	I1210 00:25:05.497034  869958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-280963
	I1210 00:25:05.517785  869958 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/config.json ...
	I1210 00:25:05.518140  869958 machine.go:93] provisionDockerMachine start ...
	I1210 00:25:05.518222  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:05.538086  869958 main.go:141] libmachine: Using SSH client type: native
	I1210 00:25:05.538385  869958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 33625 <nil> <nil>}
	I1210 00:25:05.538403  869958 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:25:05.539233  869958 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52728->127.0.0.1:33625: read: connection reset by peer
	I1210 00:25:08.674726  869958 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-280963
	
	I1210 00:25:08.674761  869958 ubuntu.go:169] provisioning hostname "old-k8s-version-280963"
	I1210 00:25:08.674875  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:08.695106  869958 main.go:141] libmachine: Using SSH client type: native
	I1210 00:25:08.695369  869958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 33625 <nil> <nil>}
	I1210 00:25:08.695396  869958 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-280963 && echo "old-k8s-version-280963" | sudo tee /etc/hostname
	I1210 00:25:08.839885  869958 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-280963
	
	I1210 00:25:08.839987  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:08.858288  869958 main.go:141] libmachine: Using SSH client type: native
	I1210 00:25:08.858477  869958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 33625 <nil> <nil>}
	I1210 00:25:08.858495  869958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-280963' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-280963/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-280963' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:25:08.987602  869958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:25:08.987635  869958 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20062-527107/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-527107/.minikube}
	I1210 00:25:08.987667  869958 ubuntu.go:177] setting up certificates
	I1210 00:25:08.987680  869958 provision.go:84] configureAuth start
	I1210 00:25:08.987751  869958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-280963
	I1210 00:25:09.005179  869958 provision.go:143] copyHostCerts
	I1210 00:25:09.005259  869958 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-527107/.minikube/ca.pem, removing ...
	I1210 00:25:09.005283  869958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-527107/.minikube/ca.pem
	I1210 00:25:09.005367  869958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-527107/.minikube/ca.pem (1082 bytes)
	I1210 00:25:09.005497  869958 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-527107/.minikube/cert.pem, removing ...
	I1210 00:25:09.005509  869958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-527107/.minikube/cert.pem
	I1210 00:25:09.005547  869958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-527107/.minikube/cert.pem (1123 bytes)
	I1210 00:25:09.005620  869958 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-527107/.minikube/key.pem, removing ...
	I1210 00:25:09.005630  869958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-527107/.minikube/key.pem
	I1210 00:25:09.005665  869958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-527107/.minikube/key.pem (1679 bytes)
	I1210 00:25:09.005733  869958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-527107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-280963 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-280963]
	I1210 00:25:09.107023  869958 provision.go:177] copyRemoteCerts
	I1210 00:25:09.107096  869958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:25:09.107135  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:09.125290  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:09.220303  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 00:25:09.243742  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 00:25:09.269835  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:25:09.293843  869958 provision.go:87] duration metric: took 306.148134ms to configureAuth
	I1210 00:25:09.293876  869958 ubuntu.go:193] setting minikube options for container-runtime
	I1210 00:25:09.294074  869958 config.go:182] Loaded profile config "old-k8s-version-280963": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1210 00:25:09.294088  869958 machine.go:96] duration metric: took 3.775928341s to provisionDockerMachine
	I1210 00:25:09.294099  869958 start.go:293] postStartSetup for "old-k8s-version-280963" (driver="docker")
	I1210 00:25:09.294114  869958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:25:09.294172  869958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:25:09.294240  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:09.312574  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:09.408223  869958 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:25:09.411858  869958 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 00:25:09.411909  869958 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1210 00:25:09.411918  869958 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1210 00:25:09.411925  869958 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1210 00:25:09.411936  869958 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-527107/.minikube/addons for local assets ...
	I1210 00:25:09.411989  869958 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-527107/.minikube/files for local assets ...
	I1210 00:25:09.412101  869958 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem -> 5339162.pem in /etc/ssl/certs
	I1210 00:25:09.412194  869958 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:25:09.421111  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem --> /etc/ssl/certs/5339162.pem (1708 bytes)
	I1210 00:25:09.444868  869958 start.go:296] duration metric: took 150.747825ms for postStartSetup
	I1210 00:25:09.444962  869958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:25:09.445017  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:09.462877  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:09.556135  869958 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 00:25:09.561322  869958 fix.go:56] duration metric: took 4.417449431s for fixHost
	I1210 00:25:09.561358  869958 start.go:83] releasing machines lock for "old-k8s-version-280963", held for 4.417502543s
	I1210 00:25:09.561430  869958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-280963
	I1210 00:25:09.579184  869958 ssh_runner.go:195] Run: cat /version.json
	I1210 00:25:09.579242  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:09.579277  869958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:25:09.579343  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:09.597163  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:09.597409  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:09.766625  869958 ssh_runner.go:195] Run: systemctl --version
	I1210 00:25:09.771804  869958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 00:25:09.776413  869958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1210 00:25:09.794671  869958 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1210 00:25:09.794741  869958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:25:09.803442  869958 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 00:25:09.803474  869958 start.go:495] detecting cgroup driver to use...
	I1210 00:25:09.803511  869958 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 00:25:09.803564  869958 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 00:25:09.816520  869958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 00:25:09.828022  869958 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:25:09.828091  869958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:25:09.841277  869958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:25:09.853753  869958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:25:09.936231  869958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:25:10.007547  869958 docker.go:233] disabling docker service ...
	I1210 00:25:10.007711  869958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:25:10.020718  869958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:25:10.031812  869958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:25:10.107125  869958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:25:10.185441  869958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:25:10.197022  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:25:10.213625  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1210 00:25:10.223636  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 00:25:10.234165  869958 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 00:25:10.234230  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 00:25:10.245193  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 00:25:10.255178  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 00:25:10.265685  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 00:25:10.275924  869958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:25:10.285519  869958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 00:25:10.295444  869958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:25:10.303778  869958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:25:10.312599  869958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:25:10.390119  869958 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 00:25:10.513095  869958 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 00:25:10.513190  869958 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 00:25:10.518071  869958 start.go:563] Will wait 60s for crictl version
	I1210 00:25:10.518159  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:10.523347  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:25:10.570593  869958 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1210 00:25:10.570671  869958 ssh_runner.go:195] Run: containerd --version
	I1210 00:25:10.594116  869958 ssh_runner.go:195] Run: containerd --version
	I1210 00:25:10.621002  869958 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1210 00:25:10.622294  869958 cli_runner.go:164] Run: docker network inspect old-k8s-version-280963 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 00:25:10.643550  869958 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 00:25:10.648089  869958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
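The hosts-file update above is an idempotent replace: any stale host.minikube.internal entry is filtered out, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. The same pattern, generalized (NAME and IP are placeholders):

	NAME=host.minikube.internal
	IP=192.168.85.1
	# Drop any stale entry, append the current one, then install the new file
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts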
	I1210 00:25:10.662028  869958 kubeadm.go:883] updating cluster {Name:old-k8s-version-280963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-280963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:25:10.662155  869958 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1210 00:25:10.662216  869958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:25:10.697231  869958 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 00:25:10.697310  869958 ssh_runner.go:195] Run: which lz4
	I1210 00:25:10.701083  869958 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:25:10.704660  869958 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:25:10.704694  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I1210 00:25:11.742097  869958 containerd.go:563] duration metric: took 1.041055167s to copy over tarball
	I1210 00:25:11.742178  869958 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:25:14.476560  869958 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.734334522s)
	I1210 00:25:14.476608  869958 containerd.go:570] duration metric: took 2.734480705s to extract the tarball
	I1210 00:25:14.476619  869958 ssh_runner.go:146] rm: /preloaded.tar.lz4
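The preload flow above is: stat the target (to skip a redundant copy), scp the ~472 MB tarball when it is absent, unpack it into /var with lz4, then remove it. The extraction step, reproduced for reference:

	# Unpack the preloaded image store into /var, preserving file capabilities
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4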
	I1210 00:25:15.425434  869958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:25:15.507986  869958 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 00:25:15.617869  869958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:25:15.654728  869958 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 00:25:15.654756  869958 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 00:25:15.654868  869958 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:25:15.655160  869958 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 00:25:15.655188  869958 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:15.655349  869958 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:15.655392  869958 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:15.655495  869958 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 00:25:15.655517  869958 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:15.655163  869958 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:15.656477  869958 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:25:15.656866  869958 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:15.656876  869958 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:15.656876  869958 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 00:25:15.656936  869958 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 00:25:15.656944  869958 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:15.656961  869958 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:15.656964  869958 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:15.863560  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I1210 00:25:15.863625  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.2
	I1210 00:25:15.868280  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I1210 00:25:15.868348  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:15.879936  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I1210 00:25:15.880014  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns:1.7.0
	I1210 00:25:15.885334  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I1210 00:25:15.885408  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:15.885881  869958 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 00:25:15.885927  869958 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 00:25:15.885968  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.889448  869958 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 00:25:15.889500  869958 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:15.889541  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.890130  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I1210 00:25:15.890182  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:15.891810  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I1210 00:25:15.891863  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:15.895618  869958 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I1210 00:25:15.895675  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:15.905650  869958 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 00:25:15.905707  869958 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 00:25:15.905761  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.909440  869958 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 00:25:15.909496  869958 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:15.909540  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.909580  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:25:15.909647  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:15.915935  869958 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 00:25:15.916013  869958 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:15.916072  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.933561  869958 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 00:25:15.933604  869958 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 00:25:15.933625  869958 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:15.933641  869958 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:15.933670  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.933683  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:15.933690  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:25:15.960949  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:15.966484  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:25:15.966564  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:15.966583  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:15.966604  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:15.966657  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:16.035430  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:25:16.145718  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:25:16.155281  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:16.155426  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:16.155519  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:16.155607  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:25:16.155687  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:16.243390  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:25:16.350165  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 00:25:16.356937  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:25:16.357022  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:25:16.357072  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:25:16.357145  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:25:16.357157  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 00:25:16.433936  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 00:25:16.528644  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 00:25:16.528722  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 00:25:16.528813  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 00:25:16.528838  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 00:25:16.933384  869958 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1210 00:25:16.933471  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:25:16.962460  869958 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 00:25:16.962573  869958 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:25:16.962649  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:25:16.966791  869958 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:25:17.304535  869958 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:25:17.304669  869958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:25:17.308395  869958 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:25:17.308421  869958 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:25:17.308491  869958 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:25:18.362304  869958 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.053781543s)
	I1210 00:25:18.362341  869958 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:25:18.362401  869958 cache_images.go:92] duration metric: took 2.707617724s to LoadCachedImages
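Image checks and loads in this phase go through ctr in the k8s.io namespace: presence is probed with a name filter on images ls, and cached tarballs are loaded with images import, exactly as run for storage-provisioner above:

	# Is the image already in containerd's store?
	sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	# Load a cached image tarball
	sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5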
	W1210 00:25:18.362485  869958 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-527107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1210 00:25:18.362507  869958 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I1210 00:25:18.362645  869958 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-280963 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-280963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:25:18.362711  869958 ssh_runner.go:195] Run: sudo crictl info
	I1210 00:25:18.398423  869958 cni.go:84] Creating CNI manager for ""
	I1210 00:25:18.398446  869958 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 00:25:18.398457  869958 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:25:18.398531  869958 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-280963 NodeName:old-k8s-version-280963 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 00:25:18.398693  869958 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-280963"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
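The file above is a four-document kubeadm manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), staged as /var/tmp/minikube/kubeadm.yaml.new. On a fresh start it would be fed to kubeadm directly; a hedged sketch of that invocation (this run instead takes the cluster-restart path below):

	sudo kubeadm init --config=/var/tmp/minikube/kubeadm.yaml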
	
	I1210 00:25:18.398757  869958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 00:25:18.408496  869958 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:25:18.408580  869958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:25:18.418508  869958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1210 00:25:18.437815  869958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:25:18.456538  869958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1210 00:25:18.474639  869958 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 00:25:18.478394  869958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:25:18.490087  869958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:25:18.577629  869958 ssh_runner.go:195] Run: sudo systemctl start kubelet
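With the drop-in and unit file scp'd into place, systemd is reloaded and kubelet started. If this step misbehaves outside CI, the usual systemd checks apply (standard commands, not taken from the log):

	sudo systemctl status kubelet --no-pager      # confirm the unit picked up the ExecStart drop-in
	sudo journalctl -u kubelet --no-pager -n 50   # recent kubelet logs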
	I1210 00:25:18.591376  869958 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963 for IP: 192.168.85.2
	I1210 00:25:18.591398  869958 certs.go:194] generating shared ca certs ...
	I1210 00:25:18.591415  869958 certs.go:226] acquiring lock for ca certs: {Name:mk98ae8901439369b17532a89b5c8e73a55c28a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:25:18.591563  869958 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-527107/.minikube/ca.key
	I1210 00:25:18.591600  869958 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-527107/.minikube/proxy-client-ca.key
	I1210 00:25:18.591609  869958 certs.go:256] generating profile certs ...
	I1210 00:25:18.591706  869958 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/client.key
	I1210 00:25:18.591760  869958 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/apiserver.key.32b39cbb
	I1210 00:25:18.591803  869958 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/proxy-client.key
	I1210 00:25:18.591901  869958 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/533916.pem (1338 bytes)
	W1210 00:25:18.591928  869958 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-527107/.minikube/certs/533916_empty.pem, impossibly tiny 0 bytes
	I1210 00:25:18.591935  869958 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:25:18.591959  869958 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:25:18.591980  869958 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:25:18.592000  869958 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/key.pem (1679 bytes)
	I1210 00:25:18.592038  869958 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem (1708 bytes)
	I1210 00:25:18.592732  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:25:18.619479  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:25:18.644820  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:25:18.674166  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 00:25:18.705882  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 00:25:18.743356  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:25:18.767725  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:25:18.792265  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/old-k8s-version-280963/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:25:18.816809  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem --> /usr/share/ca-certificates/5339162.pem (1708 bytes)
	I1210 00:25:18.841526  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:25:18.865096  869958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/certs/533916.pem --> /usr/share/ca-certificates/533916.pem (1338 bytes)
	I1210 00:25:18.888554  869958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:25:18.906216  869958 ssh_runner.go:195] Run: openssl version
	I1210 00:25:18.911638  869958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/533916.pem && ln -fs /usr/share/ca-certificates/533916.pem /etc/ssl/certs/533916.pem"
	I1210 00:25:18.920873  869958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/533916.pem
	I1210 00:25:18.924625  869958 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:51 /usr/share/ca-certificates/533916.pem
	I1210 00:25:18.924680  869958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/533916.pem
	I1210 00:25:18.931840  869958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/533916.pem /etc/ssl/certs/51391683.0"
	I1210 00:25:18.941015  869958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5339162.pem && ln -fs /usr/share/ca-certificates/5339162.pem /etc/ssl/certs/5339162.pem"
	I1210 00:25:18.950413  869958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5339162.pem
	I1210 00:25:18.954364  869958 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:51 /usr/share/ca-certificates/5339162.pem
	I1210 00:25:18.954431  869958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5339162.pem
	I1210 00:25:18.961069  869958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5339162.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:25:18.969677  869958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:25:18.979122  869958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:25:18.982999  869958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:25:18.983088  869958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:25:18.989542  869958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
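The ca-certificates wiring above follows OpenSSL's convention: each PEM lands in /usr/share/ca-certificates and is symlinked into /etc/ssl/certs under its subject-hash name (<hash>.0), which is what openssl x509 -hash prints (b5213941 for minikubeCA here). Generalized, with CERT as a placeholder:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"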
	I1210 00:25:18.998349  869958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:25:19.001915  869958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:25:19.008722  869958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:25:19.015556  869958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:25:19.022056  869958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:25:19.029008  869958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:25:19.036092  869958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
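Each -checkend 86400 probe above exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring; regenerate"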
	I1210 00:25:19.042762  869958 kubeadm.go:392] StartCluster: {Name:old-k8s-version-280963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-280963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:25:19.042956  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 00:25:19.043016  869958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:25:19.079621  869958 cri.go:89] found id: ""
	I1210 00:25:19.079685  869958 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:25:19.088408  869958 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:25:19.088428  869958 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:25:19.088468  869958 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:25:19.096662  869958 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:25:19.097725  869958 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-280963" does not appear in /home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:25:19.098351  869958 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-527107/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-280963" cluster setting kubeconfig missing "old-k8s-version-280963" context setting]
	I1210 00:25:19.099421  869958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-527107/kubeconfig: {Name:mk47c0b52ce4821be2777fdd40884aa11f573a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:25:19.101544  869958 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:25:19.110651  869958 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 00:25:19.110692  869958 kubeadm.go:597] duration metric: took 22.258553ms to restartPrimaryControlPlane
	I1210 00:25:19.110705  869958 kubeadm.go:394] duration metric: took 67.955274ms to StartCluster
	I1210 00:25:19.110727  869958 settings.go:142] acquiring lock: {Name:mk0114e7c414efdfe48670d68c91542cc6018bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:25:19.110822  869958 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:25:19.112623  869958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-527107/kubeconfig: {Name:mk47c0b52ce4821be2777fdd40884aa11f573a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:25:19.112935  869958 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 00:25:19.113076  869958 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:25:19.113168  869958 config.go:182] Loaded profile config "old-k8s-version-280963": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1210 00:25:19.113195  869958 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-280963"
	I1210 00:25:19.113220  869958 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-280963"
	I1210 00:25:19.113224  869958 addons.go:69] Setting dashboard=true in profile "old-k8s-version-280963"
	W1210 00:25:19.113233  869958 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:25:19.113238  869958 addons.go:234] Setting addon dashboard=true in "old-k8s-version-280963"
	I1210 00:25:19.113241  869958 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-280963"
	I1210 00:25:19.113268  869958 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-280963"
	I1210 00:25:19.113272  869958 host.go:66] Checking if "old-k8s-version-280963" exists ...
	W1210 00:25:19.113246  869958 addons.go:243] addon dashboard should already be in state true
	I1210 00:25:19.113340  869958 host.go:66] Checking if "old-k8s-version-280963" exists ...
	I1210 00:25:19.113266  869958 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-280963"
	I1210 00:25:19.113437  869958 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-280963"
	W1210 00:25:19.113449  869958 addons.go:243] addon metrics-server should already be in state true
	I1210 00:25:19.113483  869958 host.go:66] Checking if "old-k8s-version-280963" exists ...
	I1210 00:25:19.113610  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:19.113776  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:19.113921  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:19.114062  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:19.117140  869958 out.go:177] * Verifying Kubernetes components...
	I1210 00:25:19.118522  869958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:25:19.140966  869958 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:25:19.142496  869958 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:25:19.142523  869958 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:25:19.142586  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:19.143373  869958 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-280963"
	W1210 00:25:19.143396  869958 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:25:19.143426  869958 host.go:66] Checking if "old-k8s-version-280963" exists ...
	I1210 00:25:19.143988  869958 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 00:25:19.144201  869958 cli_runner.go:164] Run: docker container inspect old-k8s-version-280963 --format={{.State.Status}}
	I1210 00:25:19.145095  869958 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:25:19.146708  869958 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1210 00:25:19.146831  869958 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:25:19.146870  869958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:25:19.146930  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:19.155269  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 00:25:19.155300  869958 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 00:25:19.155368  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:19.173431  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:19.175360  869958 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:25:19.175386  869958 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:25:19.175466  869958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280963
	I1210 00:25:19.176455  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:19.179586  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:19.200164  869958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/old-k8s-version-280963/id_rsa Username:docker}
	I1210 00:25:19.211097  869958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:25:19.235931  869958 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-280963" to be "Ready" ...
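The readiness gate above polls the node object until its Ready condition turns True, for up to 6 minutes. A hedged one-liner equivalent with standard kubectl (not what the test runs internally):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.20.0/kubectl wait --for=condition=Ready \
	  node/old-k8s-version-280963 --timeout=6m0s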
	I1210 00:25:19.289224  869958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:25:19.289253  869958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:25:19.292756  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 00:25:19.292785  869958 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 00:25:19.293449  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:25:19.308631  869958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:25:19.308664  869958 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:25:19.311612  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 00:25:19.311637  869958 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 00:25:19.330888  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:25:19.332985  869958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:25:19.333012  869958 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:25:19.342418  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 00:25:19.342451  869958 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 00:25:19.352183  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:25:19.362013  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 00:25:19.362048  869958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
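Addon rollout follows one pattern throughout: each manifest is scp'd to /etc/kubernetes/addons/ on the node, then applied with the bundled kubectl against the node-local admin kubeconfig, e.g.:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml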
	W1210 00:25:19.441057  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.441098  869958 retry.go:31] will retry after 136.557929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.441340  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 00:25:19.441364  869958 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 00:25:19.461303  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 00:25:19.461332  869958 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1210 00:25:19.471682  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.471724  869958 retry.go:31] will retry after 256.782764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:19.533140  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.533178  869958 retry.go:31] will retry after 356.790722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.533418  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 00:25:19.533443  869958 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 00:25:19.551744  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 00:25:19.551774  869958 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 00:25:19.572234  869958 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:25:19.572272  869958 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 00:25:19.578363  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:25:19.590417  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 00:25:19.642318  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.642355  869958 retry.go:31] will retry after 409.498692ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:19.658043  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.658078  869958 retry.go:31] will retry after 336.192915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.729280  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 00:25:19.790632  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.790674  869958 retry.go:31] will retry after 271.507932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.891006  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1210 00:25:19.948782  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.948833  869958 retry.go:31] will retry after 480.017449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:19.995178  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:25:20.052617  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 00:25:20.055595  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.055634  869958 retry.go:31] will retry after 258.237786ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.062752  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 00:25:20.113816  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.113867  869958 retry.go:31] will retry after 337.236207ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:20.134111  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.134150  869958 retry.go:31] will retry after 452.648164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.314818  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 00:25:20.376402  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.376441  869958 retry.go:31] will retry after 627.261557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.429572  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:25:20.452029  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 00:25:20.502395  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.502427  869958 retry.go:31] will retry after 599.949333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:20.534055  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.534115  869958 retry.go:31] will retry after 863.044778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.587212  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 00:25:20.649358  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:20.649396  869958 retry.go:31] will retry after 867.15191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.004468  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 00:25:21.066297  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.066332  869958 retry.go:31] will retry after 1.033510101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.102521  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1210 00:25:21.163842  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.163885  869958 retry.go:31] will retry after 525.41308ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.237452  869958 node_ready.go:53] error getting node "old-k8s-version-280963": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-280963": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 00:25:21.397745  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 00:25:21.459896  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.459946  869958 retry.go:31] will retry after 1.529190224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.517148  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 00:25:21.578278  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.578320  869958 retry.go:31] will retry after 1.470604524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.690510  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1210 00:25:21.750121  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:21.750160  869958 retry.go:31] will retry after 1.298538372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:22.100712  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 00:25:22.164081  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:22.164127  869958 retry.go:31] will retry after 1.216792297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:22.990077  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:25:23.048850  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:25:23.048998  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 00:25:23.049271  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:23.049304  869958 retry.go:31] will retry after 1.666515899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:23.109611  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:23.109655  869958 retry.go:31] will retry after 2.559329091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:23.116198  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:23.116245  869958 retry.go:31] will retry after 2.643813451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:23.382152  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 00:25:23.445733  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:23.445767  869958 retry.go:31] will retry after 1.352363896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:23.736454  869958 node_ready.go:53] error getting node "old-k8s-version-280963": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-280963": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 00:25:24.717038  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:25:24.798881  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 00:25:24.945734  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:24.945869  869958 retry.go:31] will retry after 2.598618248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 00:25:25.053822  869958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:25.053860  869958 retry.go:31] will retry after 1.646204051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 00:25:25.669421  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:25:25.760567  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:25:26.700707  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:25:27.545614  869958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:25:29.533710  869958 node_ready.go:49] node "old-k8s-version-280963" has status "Ready":"True"
	I1210 00:25:29.533820  869958 node_ready.go:38] duration metric: took 10.297855721s for node "old-k8s-version-280963" to be "Ready" ...
	I1210 00:25:29.533846  869958 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:25:29.639286  869958 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-45ksb" in "kube-system" namespace to be "Ready" ...
	I1210 00:25:29.743362  869958 pod_ready.go:93] pod "coredns-74ff55c5b-45ksb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:25:29.743463  869958 pod_ready.go:82] duration metric: took 104.062424ms for pod "coredns-74ff55c5b-45ksb" in "kube-system" namespace to be "Ready" ...
	I1210 00:25:29.743491  869958 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:25:31.049317  869958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.379819734s)
	I1210 00:25:31.049428  869958 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-280963"
	I1210 00:25:31.049432  869958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.288821092s)
	I1210 00:25:31.634955  869958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.089287795s)
	I1210 00:25:31.635039  869958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.93414481s)
	I1210 00:25:31.636699  869958 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-280963 addons enable metrics-server
	
	I1210 00:25:31.638191  869958 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I1210 00:25:31.640257  869958 addons.go:510] duration metric: took 12.527203015s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I1210 00:25:31.755358  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:34.248918  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:36.249572  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:38.249821  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:40.749001  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:42.749856  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:45.249201  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:47.250189  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:49.751627  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:52.254716  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:54.751535  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:57.249845  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:25:59.750032  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:02.249952  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:04.250715  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:06.749892  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:09.249209  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:11.250233  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:13.749362  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:15.749804  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:17.751647  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:20.250023  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:22.251107  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:24.251687  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:26.750510  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:29.252385  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:31.751344  869958 pod_ready.go:103] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:32.254107  869958 pod_ready.go:93] pod "etcd-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"True"
	I1210 00:26:32.254152  869958 pod_ready.go:82] duration metric: took 1m2.510643569s for pod "etcd-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:32.254172  869958 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:32.260304  869958 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"True"
	I1210 00:26:32.260332  869958 pod_ready.go:82] duration metric: took 6.135724ms for pod "kube-apiserver-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:32.260347  869958 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:34.268497  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:36.766637  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:39.266049  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:41.267319  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:43.765455  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:45.768506  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:48.267369  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:50.767018  869958 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:53.267354  869958 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"True"
	I1210 00:26:53.267387  869958 pod_ready.go:82] duration metric: took 21.007031158s for pod "kube-controller-manager-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:53.267402  869958 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qb2z4" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:53.272279  869958 pod_ready.go:93] pod "kube-proxy-qb2z4" in "kube-system" namespace has status "Ready":"True"
	I1210 00:26:53.272302  869958 pod_ready.go:82] duration metric: took 4.892582ms for pod "kube-proxy-qb2z4" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:53.272311  869958 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:26:55.278374  869958 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:57.778574  869958 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:26:59.780845  869958 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:00.279229  869958 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-280963" in "kube-system" namespace has status "Ready":"True"
	I1210 00:27:00.279257  869958 pod_ready.go:82] duration metric: took 7.006938026s for pod "kube-scheduler-old-k8s-version-280963" in "kube-system" namespace to be "Ready" ...
	I1210 00:27:00.279355  869958 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace to be "Ready" ...
	I1210 00:27:02.285732  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:04.785454  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:06.785491  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:09.293784  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:11.784823  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:14.285252  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:16.285550  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:18.285842  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:20.286226  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:22.785073  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:24.785564  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:27.286243  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:29.803161  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:32.285546  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:34.286118  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:36.785634  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:38.785899  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:41.285224  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:43.285577  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:45.285991  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:47.784625  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:49.786301  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:52.285363  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:54.285460  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:56.786115  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:27:59.285254  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:01.285483  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:03.785176  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:05.785246  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:07.785452  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:10.284795  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:12.285232  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:14.785094  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:16.785660  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:19.287937  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:21.784924  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:23.785171  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:25.785921  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:28.285782  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:30.285823  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:32.785726  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:35.285944  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:37.785251  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:40.284596  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:42.285244  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:44.785547  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:47.284970  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:49.286322  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:51.786544  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:54.286136  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:56.287180  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:28:58.785990  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:00.786571  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:02.787217  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:05.284892  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:07.286113  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:09.287066  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:11.787108  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:14.286002  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:16.286339  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:18.785927  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:20.786144  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:23.285107  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:25.285691  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:27.786048  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:30.284394  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:32.285567  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:34.789978  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:37.285571  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:39.785467  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:41.786195  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:44.285629  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:46.286997  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:48.784966  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:50.785565  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:53.287484  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:55.785238  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:58.285947  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:00.784947  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:02.785627  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:05.285425  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:07.785233  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:10.285053  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:12.785346  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:15.284585  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:17.285141  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:19.785686  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:21.786049  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:24.285409  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:26.286022  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:28.785097  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:30.785142  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:33.285545  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:35.785510  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:38.284978  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:40.784638  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:42.784707  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:44.784825  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:47.285489  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:49.286104  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:51.785143  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:53.786017  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:56.285495  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:58.285964  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:31:00.285755  869958 pod_ready.go:82] duration metric: took 4m0.006380848s for pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace to be "Ready" ...
	E1210 00:31:00.285781  869958 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:31:00.285790  869958 pod_ready.go:39] duration metric: took 5m30.751897187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:31:00.285822  869958 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:31:00.285858  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:31:00.285917  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:31:00.324417  869958 cri.go:89] found id: "9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:00.324440  869958 cri.go:89] found id: ""
	I1210 00:31:00.324448  869958 logs.go:282] 1 containers: [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d]
	I1210 00:31:00.324499  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.328595  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 00:31:00.328691  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:31:00.364828  869958 cri.go:89] found id: "de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:00.364857  869958 cri.go:89] found id: ""
	I1210 00:31:00.364868  869958 logs.go:282] 1 containers: [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2]
	I1210 00:31:00.364938  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.368615  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 00:31:00.368696  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:31:00.403140  869958 cri.go:89] found id: "d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:00.403164  869958 cri.go:89] found id: ""
	I1210 00:31:00.403174  869958 logs.go:282] 1 containers: [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0]
	I1210 00:31:00.403233  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.406693  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:31:00.406754  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:31:00.440261  869958 cri.go:89] found id: "e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:00.440286  869958 cri.go:89] found id: ""
	I1210 00:31:00.440294  869958 logs.go:282] 1 containers: [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07]
	I1210 00:31:00.440356  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.443836  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:31:00.443908  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:31:00.478920  869958 cri.go:89] found id: "930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:00.478945  869958 cri.go:89] found id: ""
	I1210 00:31:00.478955  869958 logs.go:282] 1 containers: [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1]
	I1210 00:31:00.479020  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.482648  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:31:00.482713  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:31:00.517931  869958 cri.go:89] found id: "7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:00.517959  869958 cri.go:89] found id: ""
	I1210 00:31:00.517969  869958 logs.go:282] 1 containers: [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82]
	I1210 00:31:00.518027  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.522393  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 00:31:00.522470  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:31:00.558076  869958 cri.go:89] found id: "1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:00.558099  869958 cri.go:89] found id: ""
	I1210 00:31:00.558107  869958 logs.go:282] 1 containers: [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a]
	I1210 00:31:00.558159  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.561741  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:31:00.561812  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:31:00.598626  869958 cri.go:89] found id: "b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:00.598664  869958 cri.go:89] found id: "5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:00.598674  869958 cri.go:89] found id: ""
	I1210 00:31:00.598682  869958 logs.go:282] 2 containers: [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24]
	I1210 00:31:00.598746  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.602345  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.605648  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:31:00.605713  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:31:00.638537  869958 cri.go:89] found id: "71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:00.638564  869958 cri.go:89] found id: ""
	I1210 00:31:00.638574  869958 logs.go:282] 1 containers: [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e]
	I1210 00:31:00.638635  869958 ssh_runner.go:195] Run: which crictl
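Each listing above resolves exactly one container ID per control-plane component before log gathering begins. The same two-step lookup, done by hand inside the node, looks roughly like this (a sketch only; container IDs differ per run, and the node shell is reachable via minikube ssh -p old-k8s-version-280963):

# Mirror cri.go/logs.go: find each component's container, then tail its logs.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
  id=$(sudo crictl ps -a --quiet --name="$name" | head -n 1)
  [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
done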
	I1210 00:31:00.642267  869958 logs.go:123] Gathering logs for kubelet ...
	I1210 00:31:00.642297  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 00:31:00.684072  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.013371    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.684251  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.276847    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.686239  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:49 old-k8s-version-280963 kubelet[1066]: E1210 00:25:49.092116    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.687741  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:00 old-k8s-version-280963 kubelet[1066]: E1210 00:26:00.341400    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.687978  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:01 old-k8s-version-280963 kubelet[1066]: E1210 00:26:01.348361    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.688111  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.063829    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.688445  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.351893    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.690436  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:14 old-k8s-version-280963 kubelet[1066]: E1210 00:26:14.082796    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.691134  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:16 old-k8s-version-280963 kubelet[1066]: E1210 00:26:16.385007    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.691375  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:20 old-k8s-version-280963 kubelet[1066]: E1210 00:26:20.929425    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.691523  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:25 old-k8s-version-280963 kubelet[1066]: E1210 00:26:25.063820    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.691758  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:32 old-k8s-version-280963 kubelet[1066]: E1210 00:26:32.063614    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.691889  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:37 old-k8s-version-280963 kubelet[1066]: E1210 00:26:37.063859    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.692313  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:45 old-k8s-version-280963 kubelet[1066]: E1210 00:26:45.451805    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.692572  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:50 old-k8s-version-280963 kubelet[1066]: E1210 00:26:50.929571    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.692717  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:51 old-k8s-version-280963 kubelet[1066]: E1210 00:26:51.063691    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.692950  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:04 old-k8s-version-280963 kubelet[1066]: E1210 00:27:04.063486    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.694659  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:06 old-k8s-version-280963 kubelet[1066]: E1210 00:27:06.100485    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.694960  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:16 old-k8s-version-280963 kubelet[1066]: E1210 00:27:16.063301    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.695110  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:18 old-k8s-version-280963 kubelet[1066]: E1210 00:27:18.063936    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.695245  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:29 old-k8s-version-280963 kubelet[1066]: E1210 00:27:29.063910    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.695668  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:30 old-k8s-version-280963 kubelet[1066]: E1210 00:27:30.551122    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.695901  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:31 old-k8s-version-280963 kubelet[1066]: E1210 00:27:31.554624    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.696137  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:42 old-k8s-version-280963 kubelet[1066]: E1210 00:27:42.063651    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.696291  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:43 old-k8s-version-280963 kubelet[1066]: E1210 00:27:43.063770    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.696535  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:54 old-k8s-version-280963 kubelet[1066]: E1210 00:27:54.063558    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.696667  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:55 old-k8s-version-280963 kubelet[1066]: E1210 00:27:55.063561    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.696899  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:05 old-k8s-version-280963 kubelet[1066]: E1210 00:28:05.063379    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.697036  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:10 old-k8s-version-280963 kubelet[1066]: E1210 00:28:10.063837    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.697268  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:18 old-k8s-version-280963 kubelet[1066]: E1210 00:28:18.063477    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.697399  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:23 old-k8s-version-280963 kubelet[1066]: E1210 00:28:23.063704    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.697631  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:29 old-k8s-version-280963 kubelet[1066]: E1210 00:28:29.063218    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.699384  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:36 old-k8s-version-280963 kubelet[1066]: E1210 00:28:36.089234    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.699619  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:40 old-k8s-version-280963 kubelet[1066]: E1210 00:28:40.063266    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.699750  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:50 old-k8s-version-280963 kubelet[1066]: E1210 00:28:50.063870    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.700169  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:55 old-k8s-version-280963 kubelet[1066]: E1210 00:28:55.726230    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.700403  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:00 old-k8s-version-280963 kubelet[1066]: E1210 00:29:00.929346    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.700534  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:01 old-k8s-version-280963 kubelet[1066]: E1210 00:29:01.063931    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.700665  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:12 old-k8s-version-280963 kubelet[1066]: E1210 00:29:12.063883    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.700897  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:13 old-k8s-version-280963 kubelet[1066]: E1210 00:29:13.063415    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.701157  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063471    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.701316  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063913    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.701550  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063693    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.701682  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063872    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.701914  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:49 old-k8s-version-280963 kubelet[1066]: E1210 00:29:49.063306    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.702050  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:50 old-k8s-version-280963 kubelet[1066]: E1210 00:29:50.063838    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.702287  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:03 old-k8s-version-280963 kubelet[1066]: E1210 00:30:03.063224    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.702419  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:05 old-k8s-version-280963 kubelet[1066]: E1210 00:30:05.063807    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.702550  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:17 old-k8s-version-280963 kubelet[1066]: E1210 00:30:17.063774    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.702784  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:18 old-k8s-version-280963 kubelet[1066]: E1210 00:30:18.063380    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703080  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:29 old-k8s-version-280963 kubelet[1066]: E1210 00:30:29.063392    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703219  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.703456  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703587  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.703818  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703952  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
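The kubelet scan above shows two steady failure loops across the whole window: metrics-server alternates ErrImagePull and ImagePullBackOff because fake.domain never resolves, and dashboard-metrics-scraper's CrashLoopBackOff delay doubles through 10s, 20s, 40s, 1m20s, 2m40s, matching kubelet's exponential restart back-off (which caps at 5m). A hedged way to pull the same evidence through the API server, again assuming the context is named after the profile:

# Events for the two failing pods named in the log above.
kubectl --context old-k8s-version-280963 -n kube-system get events \
  --field-selector involvedObject.name=metrics-server-9975d5f86-9wg6p
kubectl --context old-k8s-version-280963 -n kubernetes-dashboard get events \
  --field-selector involvedObject.name=dashboard-metrics-scraper-8d5bb5db8-h78fg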
	I1210 00:31:00.703964  869958 logs.go:123] Gathering logs for kube-proxy [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1] ...
	I1210 00:31:00.703989  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:00.739278  869958 logs.go:123] Gathering logs for kube-controller-manager [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82] ...
	I1210 00:31:00.739323  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:00.809749  869958 logs.go:123] Gathering logs for storage-provisioner [5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24] ...
	I1210 00:31:00.809793  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:00.843898  869958 logs.go:123] Gathering logs for containerd ...
	I1210 00:31:00.843932  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 00:31:00.905189  869958 logs.go:123] Gathering logs for kube-apiserver [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d] ...
	I1210 00:31:00.905248  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:00.975171  869958 logs.go:123] Gathering logs for kube-scheduler [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07] ...
	I1210 00:31:00.975214  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:01.018685  869958 logs.go:123] Gathering logs for kubernetes-dashboard [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e] ...
	I1210 00:31:01.018727  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:01.055194  869958 logs.go:123] Gathering logs for dmesg ...
	I1210 00:31:01.055228  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:31:01.082490  869958 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:31:01.082531  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:31:01.188477  869958 logs.go:123] Gathering logs for etcd [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2] ...
	I1210 00:31:01.188515  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:01.231162  869958 logs.go:123] Gathering logs for kindnet [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a] ...
	I1210 00:31:01.231200  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:01.270495  869958 logs.go:123] Gathering logs for storage-provisioner [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e] ...
	I1210 00:31:01.270532  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:01.304676  869958 logs.go:123] Gathering logs for container status ...
	I1210 00:31:01.304717  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:31:01.342082  869958 logs.go:123] Gathering logs for coredns [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0] ...
	I1210 00:31:01.342114  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:01.377229  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:01.377257  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1210 00:31:01.377336  869958 out.go:270] X Problems detected in kubelet:
	W1210 00:31:01.377354  869958 out.go:270]   Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:01.377363  869958 out.go:270]   Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:01.377375  869958 out.go:270]   Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:01.377384  869958 out.go:270]   Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:01.377397  869958 out.go:270]   Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:01.377405  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:01.377416  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
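With the first diagnostic pass summarized, the log shows a ten-second gap, after which the test re-checks that the kube-apiserver process is alive and then moves on to polling the apiserver's healthz endpoint (api_server.go below). A hedged reproduction of both checks from the host, assuming the same profile and context names as above:

# Process check, as run by ssh_runner below; then the health probe api_server.go waits on.
minikube ssh -p old-k8s-version-280963 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
kubectl --context old-k8s-version-280963 get --raw='/healthz'
# A healthy apiserver answers: ok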
	I1210 00:31:11.378266  869958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:31:11.390994  869958 api_server.go:72] duration metric: took 5m52.278015509s to wait for apiserver process to appear ...
	I1210 00:31:11.391028  869958 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:31:11.391084  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:31:11.391155  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:31:11.425078  869958 cri.go:89] found id: "9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:11.425104  869958 cri.go:89] found id: ""
	I1210 00:31:11.425113  869958 logs.go:282] 1 containers: [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d]
	I1210 00:31:11.425183  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.428759  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 00:31:11.428836  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:31:11.463276  869958 cri.go:89] found id: "de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:11.463305  869958 cri.go:89] found id: ""
	I1210 00:31:11.463313  869958 logs.go:282] 1 containers: [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2]
	I1210 00:31:11.463360  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.467102  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 00:31:11.467171  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:31:11.503957  869958 cri.go:89] found id: "d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:11.504006  869958 cri.go:89] found id: ""
	I1210 00:31:11.504016  869958 logs.go:282] 1 containers: [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0]
	I1210 00:31:11.504079  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.507966  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:31:11.508041  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:31:11.542392  869958 cri.go:89] found id: "e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:11.542415  869958 cri.go:89] found id: ""
	I1210 00:31:11.542422  869958 logs.go:282] 1 containers: [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07]
	I1210 00:31:11.542484  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.546043  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:31:11.546105  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:31:11.583274  869958 cri.go:89] found id: "930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:11.583305  869958 cri.go:89] found id: ""
	I1210 00:31:11.583316  869958 logs.go:282] 1 containers: [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1]
	I1210 00:31:11.583376  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.587533  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:31:11.587622  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:31:11.622287  869958 cri.go:89] found id: "7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:11.622329  869958 cri.go:89] found id: ""
	I1210 00:31:11.622338  869958 logs.go:282] 1 containers: [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82]
	I1210 00:31:11.622399  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.626227  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 00:31:11.626300  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:31:11.661096  869958 cri.go:89] found id: "1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:11.661119  869958 cri.go:89] found id: ""
	I1210 00:31:11.661126  869958 logs.go:282] 1 containers: [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a]
	I1210 00:31:11.661173  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.664907  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:31:11.664974  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:31:11.701413  869958 cri.go:89] found id: "71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:11.701439  869958 cri.go:89] found id: ""
	I1210 00:31:11.701448  869958 logs.go:282] 1 containers: [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e]
	I1210 00:31:11.701498  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.705199  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:31:11.705268  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:31:11.739637  869958 cri.go:89] found id: "b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:11.739669  869958 cri.go:89] found id: "5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:11.739674  869958 cri.go:89] found id: ""
	I1210 00:31:11.739682  869958 logs.go:282] 2 containers: [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24]
	I1210 00:31:11.739748  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.743857  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.747864  869958 logs.go:123] Gathering logs for kube-scheduler [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07] ...
	I1210 00:31:11.747897  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:11.787539  869958 logs.go:123] Gathering logs for kube-controller-manager [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82] ...
	I1210 00:31:11.787577  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:11.854239  869958 logs.go:123] Gathering logs for kubernetes-dashboard [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e] ...
	I1210 00:31:11.854286  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:11.890628  869958 logs.go:123] Gathering logs for storage-provisioner [5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24] ...
	I1210 00:31:11.890659  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:11.924933  869958 logs.go:123] Gathering logs for dmesg ...
	I1210 00:31:11.924977  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:31:11.952597  869958 logs.go:123] Gathering logs for kube-apiserver [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d] ...
	I1210 00:31:11.952639  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:12.008186  869958 logs.go:123] Gathering logs for etcd [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2] ...
	I1210 00:31:12.008225  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:12.050981  869958 logs.go:123] Gathering logs for kindnet [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a] ...
	I1210 00:31:12.051019  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:12.092306  869958 logs.go:123] Gathering logs for storage-provisioner [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e] ...
	I1210 00:31:12.092348  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:12.126824  869958 logs.go:123] Gathering logs for kubelet ...
	I1210 00:31:12.126877  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 00:31:12.167149  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.013371    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.167339  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.276847    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.169400  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:49 old-k8s-version-280963 kubelet[1066]: E1210 00:25:49.092116    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.170983  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:00 old-k8s-version-280963 kubelet[1066]: E1210 00:26:00.341400    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.171225  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:01 old-k8s-version-280963 kubelet[1066]: E1210 00:26:01.348361    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.171364  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.063829    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.171704  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.351893    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.173755  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:14 old-k8s-version-280963 kubelet[1066]: E1210 00:26:14.082796    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.174505  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:16 old-k8s-version-280963 kubelet[1066]: E1210 00:26:16.385007    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.174745  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:20 old-k8s-version-280963 kubelet[1066]: E1210 00:26:20.929425    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.174905  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:25 old-k8s-version-280963 kubelet[1066]: E1210 00:26:25.063820    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.175143  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:32 old-k8s-version-280963 kubelet[1066]: E1210 00:26:32.063614    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.175321  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:37 old-k8s-version-280963 kubelet[1066]: E1210 00:26:37.063859    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.175745  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:45 old-k8s-version-280963 kubelet[1066]: E1210 00:26:45.451805    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.175980  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:50 old-k8s-version-280963 kubelet[1066]: E1210 00:26:50.929571    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.176120  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:51 old-k8s-version-280963 kubelet[1066]: E1210 00:26:51.063691    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.176358  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:04 old-k8s-version-280963 kubelet[1066]: E1210 00:27:04.063486    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.178089  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:06 old-k8s-version-280963 kubelet[1066]: E1210 00:27:06.100485    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.178356  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:16 old-k8s-version-280963 kubelet[1066]: E1210 00:27:16.063301    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.178495  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:18 old-k8s-version-280963 kubelet[1066]: E1210 00:27:18.063936    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.178628  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:29 old-k8s-version-280963 kubelet[1066]: E1210 00:27:29.063910    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.179087  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:30 old-k8s-version-280963 kubelet[1066]: E1210 00:27:30.551122    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.179328  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:31 old-k8s-version-280963 kubelet[1066]: E1210 00:27:31.554624    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.179563  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:42 old-k8s-version-280963 kubelet[1066]: E1210 00:27:42.063651    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.179696  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:43 old-k8s-version-280963 kubelet[1066]: E1210 00:27:43.063770    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.179934  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:54 old-k8s-version-280963 kubelet[1066]: E1210 00:27:54.063558    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.180068  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:55 old-k8s-version-280963 kubelet[1066]: E1210 00:27:55.063561    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.180308  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:05 old-k8s-version-280963 kubelet[1066]: E1210 00:28:05.063379    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.180463  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:10 old-k8s-version-280963 kubelet[1066]: E1210 00:28:10.063837    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.180701  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:18 old-k8s-version-280963 kubelet[1066]: E1210 00:28:18.063477    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.180836  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:23 old-k8s-version-280963 kubelet[1066]: E1210 00:28:23.063704    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.181073  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:29 old-k8s-version-280963 kubelet[1066]: E1210 00:28:29.063218    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.182823  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:36 old-k8s-version-280963 kubelet[1066]: E1210 00:28:36.089234    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.183092  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:40 old-k8s-version-280963 kubelet[1066]: E1210 00:28:40.063266    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.183227  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:50 old-k8s-version-280963 kubelet[1066]: E1210 00:28:50.063870    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.183655  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:55 old-k8s-version-280963 kubelet[1066]: E1210 00:28:55.726230    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.183890  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:00 old-k8s-version-280963 kubelet[1066]: E1210 00:29:00.929346    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.184024  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:01 old-k8s-version-280963 kubelet[1066]: E1210 00:29:01.063931    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.184157  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:12 old-k8s-version-280963 kubelet[1066]: E1210 00:29:12.063883    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.184400  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:13 old-k8s-version-280963 kubelet[1066]: E1210 00:29:13.063415    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.184636  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063471    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.184769  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063913    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.185004  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063693    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.185138  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063872    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.185382  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:49 old-k8s-version-280963 kubelet[1066]: E1210 00:29:49.063306    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.185515  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:50 old-k8s-version-280963 kubelet[1066]: E1210 00:29:50.063838    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.185750  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:03 old-k8s-version-280963 kubelet[1066]: E1210 00:30:03.063224    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.185883  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:05 old-k8s-version-280963 kubelet[1066]: E1210 00:30:05.063807    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.186018  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:17 old-k8s-version-280963 kubelet[1066]: E1210 00:30:17.063774    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.186253  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:18 old-k8s-version-280963 kubelet[1066]: E1210 00:30:18.063380    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.186506  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:29 old-k8s-version-280963 kubelet[1066]: E1210 00:30:29.063392    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.186644  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.186900  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.187083  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.187488  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.187699  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.188009  869958 logs.go:138] Found kubelet problem: Dec 10 00:31:06 old-k8s-version-280963 kubelet[1066]: E1210 00:31:06.063272    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.188179  869958 logs.go:138] Found kubelet problem: Dec 10 00:31:09 old-k8s-version-280963 kubelet[1066]: E1210 00:31:09.063618    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:12.188199  869958 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:31:12.188219  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:31:12.291065  869958 logs.go:123] Gathering logs for kube-proxy [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1] ...
	I1210 00:31:12.291103  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:12.325400  869958 logs.go:123] Gathering logs for containerd ...
	I1210 00:31:12.325437  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 00:31:12.385096  869958 logs.go:123] Gathering logs for coredns [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0] ...
	I1210 00:31:12.385143  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:12.421781  869958 logs.go:123] Gathering logs for container status ...
	I1210 00:31:12.421815  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:31:12.458769  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:12.458797  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1210 00:31:12.458963  869958 out.go:270] X Problems detected in kubelet:
	W1210 00:31:12.458980  869958 out.go:270]   Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.458988  869958 out.go:270]   Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.459000  869958 out.go:270]   Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.459010  869958 out.go:270]   Dec 10 00:31:06 old-k8s-version-280963 kubelet[1066]: E1210 00:31:06.063272    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.459023  869958 out.go:270]   Dec 10 00:31:09 old-k8s-version-280963 kubelet[1066]: E1210 00:31:09.063618    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:12.459048  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:12.459062  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:22.460270  869958 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 00:31:22.467259  869958 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 00:31:22.469499  869958 out.go:201] 
	W1210 00:31:22.470824  869958 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1210 00:31:22.470878  869958 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1210 00:31:22.470901  869958 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1210 00:31:22.470913  869958 out.go:270] * 
	W1210 00:31:22.472041  869958 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:31:22.473975  869958 out.go:201] 

                                                
                                                
** /stderr **
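The stderr above ends in the contradiction worth probing: the apiserver answers /healthz with 200, yet the wait loop reports the control plane never updated to v1.20.0. A minimal sketch for checking the same endpoints by hand, reusing the IP, port, and paths from the log above (running this against a live profile is an assumption; /version, like /healthz, is served to unauthenticated clients under default RBAC):

	# Health probe, exactly as the checker in the log does it.
	curl -k https://192.168.85.2:8443/healthz

	# What version the control plane actually reports vs. the target v1.20.0.
	curl -k https://192.168.85.2:8443/version

	# Node-reported versions, via the same kubectl binary and kubeconfig the
	# log uses for "describe nodes".
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes -o wide \
	  --kubeconfig=/var/lib/minikube/kubeconfig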
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
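The suggestion printed in the stderr above is the documented escape hatch for issue 11417. A hedged recovery sketch built only from commands that already appear in this report (whether a clean re-run avoids the version-wait failure is an assumption):

	# From the failure output: wipe all profiles and cached state.
	out/minikube-linux-amd64 delete --all --purge

	# Re-issue the exact second-start invocation the test used.
	out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.20.0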
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-280963
helpers_test.go:235: (dbg) docker inspect old-k8s-version-280963:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b9e5f9136a718f848f024d4c77415a39541d5503b6eff1df9f19c9a53ce350a",
	        "Created": "2024-12-10T00:22:28.923428486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 870250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-10T00:25:05.301512011Z",
	            "FinishedAt": "2024-12-10T00:25:04.3636608Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/8b9e5f9136a718f848f024d4c77415a39541d5503b6eff1df9f19c9a53ce350a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b9e5f9136a718f848f024d4c77415a39541d5503b6eff1df9f19c9a53ce350a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b9e5f9136a718f848f024d4c77415a39541d5503b6eff1df9f19c9a53ce350a/hosts",
	        "LogPath": "/var/lib/docker/containers/8b9e5f9136a718f848f024d4c77415a39541d5503b6eff1df9f19c9a53ce350a/8b9e5f9136a718f848f024d4c77415a39541d5503b6eff1df9f19c9a53ce350a-json.log",
	        "Name": "/old-k8s-version-280963",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-280963:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-280963",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0439f432a0d9ef4f820fd26f56c91315ad3e226b5d1f39453458892acb5b101d-init/diff:/var/lib/docker/overlay2/bae8d7d00d99e063ddf62cc977f255b7c2fa4bde63ebe9a612d21991917b231b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0439f432a0d9ef4f820fd26f56c91315ad3e226b5d1f39453458892acb5b101d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0439f432a0d9ef4f820fd26f56c91315ad3e226b5d1f39453458892acb5b101d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0439f432a0d9ef4f820fd26f56c91315ad3e226b5d1f39453458892acb5b101d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-280963",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-280963/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-280963",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-280963",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-280963",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b52493461069750ca2a8a82adb0dacddd1abb3d0678d043741666eb89f68be76",
	            "SandboxKey": "/var/run/docker/netns/b52493461069",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33625"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33626"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33629"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33627"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33628"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-280963": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b7562d0c2da9c6ac6ff2d5d7a94c372cb75974da8a5912a88e6a51c8f16e809a",
	                    "EndpointID": "882b0db8972014e7b722b37ee737c2eef86618053847fbc8cef86c41dd4e66ad",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-280963",
	                        "8b9e5f9136a7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
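The inspect JSON above is complete but verbose; when only a few fields matter, docker's standard Go-template flag narrows each to one line (the field paths below mirror the JSON above):

	# Container state and restart count ("State" / "RestartCount" above).
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-280963

	# Profile-network IP ("NetworkSettings.Networks" above).
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-280963

	# Host port published for the apiserver's 8443/tcp ("Ports" above).
	docker port old-k8s-version-280963 8443/tcp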
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280963 -n old-k8s-version-280963
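The status check above also uses a Go template; {{.Host}} is one field of minikube's status struct. A couple of hedged variants (APIServer is another field shown by `minikube status`; exact field availability per version is an assumption):

	# Host state only, as the harness queries it.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-280963

	# Apiserver component state from the same struct.
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-280963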
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-280963 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-280963 logs -n 25: (1.215361107s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-757313 image list                          | embed-certs-757313           | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-757313                                  | embed-certs-757313           | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-757313                                  | embed-certs-757313           | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-757313                                  | embed-certs-757313           | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
	| delete  | -p embed-certs-757313                                  | embed-certs-757313           | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
	| start   | -p newest-cni-451721 --memory=2200 --alsologtostderr   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-337138                           | default-k8s-diff-port-337138 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-337138 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | default-k8s-diff-port-337138                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-337138 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | default-k8s-diff-port-337138                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-337138 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | default-k8s-diff-port-337138                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-337138 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | default-k8s-diff-port-337138                           |                              |         |         |                     |                     |
	| image   | no-preload-073501 image list                           | no-preload-073501            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-073501                                   | no-preload-073501            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-073501                                   | no-preload-073501            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-073501                                   | no-preload-073501            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	| delete  | -p no-preload-073501                                   | no-preload-073501            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	| addons  | enable metrics-server -p newest-cni-451721             | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-451721                                   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-451721                  | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-451721 --memory=2200 --alsologtostderr   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-451721 image list                           | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-451721                                   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-451721                                   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-451721                                   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	| delete  | -p newest-cni-451721                                   | newest-cni-451721            | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:29:28
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:29:28.052619  887464 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:29:28.052886  887464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:29:28.052895  887464 out.go:358] Setting ErrFile to fd 2...
	I1210 00:29:28.052898  887464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:29:28.053105  887464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1210 00:29:28.053702  887464 out.go:352] Setting JSON to false
	I1210 00:29:28.054886  887464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":11512,"bootTime":1733779056,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:29:28.054998  887464 start.go:139] virtualization: kvm guest
	I1210 00:29:28.057226  887464 out.go:177] * [newest-cni-451721] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:29:28.058464  887464 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:29:28.058466  887464 notify.go:220] Checking for updates...
	I1210 00:29:28.059653  887464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:29:28.061038  887464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:29:28.062198  887464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1210 00:29:28.063507  887464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:29:28.064754  887464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:29:28.066267  887464 config.go:182] Loaded profile config "newest-cni-451721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:29:28.066916  887464 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:29:28.092208  887464 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1210 00:29:28.092359  887464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:29:28.142299  887464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-10 00:29:28.132746908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:29:28.142419  887464 docker.go:318] overlay module found
	I1210 00:29:28.144233  887464 out.go:177] * Using the docker driver based on existing profile
	I1210 00:29:28.145516  887464 start.go:297] selected driver: docker
	I1210 00:29:28.145551  887464 start.go:901] validating driver "docker" against &{Name:newest-cni-451721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-451721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:29:28.145689  887464 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:29:28.146625  887464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:29:28.194993  887464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-10 00:29:28.186038818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:29:28.195544  887464 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:29:28.195588  887464 cni.go:84] Creating CNI manager for ""
	I1210 00:29:28.195648  887464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 00:29:28.195721  887464 start.go:340] cluster config:
	{Name:newest-cni-451721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-451721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:29:28.197682  887464 out.go:177] * Starting "newest-cni-451721" primary control-plane node in "newest-cni-451721" cluster
	I1210 00:29:28.198793  887464 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1210 00:29:28.200128  887464 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1210 00:29:28.201266  887464 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1210 00:29:28.201310  887464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
	I1210 00:29:28.201308  887464 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1210 00:29:28.201325  887464 cache.go:56] Caching tarball of preloaded images
	I1210 00:29:28.201554  887464 preload.go:172] Found /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 00:29:28.201573  887464 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1210 00:29:28.201742  887464 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/config.json ...
	I1210 00:29:28.225299  887464 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1210 00:29:28.225322  887464 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1210 00:29:28.225339  887464 cache.go:194] Successfully downloaded all kic artifacts
	I1210 00:29:28.225381  887464 start.go:360] acquireMachinesLock for newest-cni-451721: {Name:mk18bdae31d39ddc90280f156a2e9122e8fc8159 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:29:28.225443  887464 start.go:364] duration metric: took 40.017µs to acquireMachinesLock for "newest-cni-451721"
	I1210 00:29:28.225460  887464 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:29:28.225465  887464 fix.go:54] fixHost starting: 
	I1210 00:29:28.225678  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:28.243713  887464 fix.go:112] recreateIfNeeded on newest-cni-451721: state=Stopped err=<nil>
	W1210 00:29:28.243748  887464 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:29:28.245424  887464 out.go:177] * Restarting existing docker container for "newest-cni-451721" ...
	I1210 00:29:25.285691  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:27.786048  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:28.246623  887464 cli_runner.go:164] Run: docker start newest-cni-451721
	I1210 00:29:28.522403  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:28.541982  887464 kic.go:430] container "newest-cni-451721" state is running.
	I1210 00:29:28.542412  887464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-451721
	I1210 00:29:28.561718  887464 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/config.json ...
	I1210 00:29:28.561949  887464 machine.go:93] provisionDockerMachine start ...
	I1210 00:29:28.562009  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:28.581250  887464 main.go:141] libmachine: Using SSH client type: native
	I1210 00:29:28.581490  887464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 33635 <nil> <nil>}
	I1210 00:29:28.581530  887464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:29:28.582219  887464 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53622->127.0.0.1:33635: read: connection reset by peer
	I1210 00:29:31.718566  887464 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-451721
	
	I1210 00:29:31.718598  887464 ubuntu.go:169] provisioning hostname "newest-cni-451721"
	I1210 00:29:31.718678  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:31.739094  887464 main.go:141] libmachine: Using SSH client type: native
	I1210 00:29:31.739288  887464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 33635 <nil> <nil>}
	I1210 00:29:31.739302  887464 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-451721 && echo "newest-cni-451721" | sudo tee /etc/hostname
	I1210 00:29:31.878663  887464 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-451721
	
	I1210 00:29:31.878749  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:31.898635  887464 main.go:141] libmachine: Using SSH client type: native
	I1210 00:29:31.898817  887464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 33635 <nil> <nil>}
	I1210 00:29:31.898879  887464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-451721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-451721/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-451721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:29:32.031354  887464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:29:32.031382  887464 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20062-527107/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-527107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-527107/.minikube}
	I1210 00:29:32.031413  887464 ubuntu.go:177] setting up certificates
	I1210 00:29:32.031427  887464 provision.go:84] configureAuth start
	I1210 00:29:32.031490  887464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-451721
	I1210 00:29:32.049598  887464 provision.go:143] copyHostCerts
	I1210 00:29:32.049667  887464 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-527107/.minikube/ca.pem, removing ...
	I1210 00:29:32.049688  887464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-527107/.minikube/ca.pem
	I1210 00:29:32.049780  887464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-527107/.minikube/ca.pem (1082 bytes)
	I1210 00:29:32.049924  887464 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-527107/.minikube/cert.pem, removing ...
	I1210 00:29:32.049940  887464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-527107/.minikube/cert.pem
	I1210 00:29:32.049978  887464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-527107/.minikube/cert.pem (1123 bytes)
	I1210 00:29:32.050077  887464 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-527107/.minikube/key.pem, removing ...
	I1210 00:29:32.050090  887464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-527107/.minikube/key.pem
	I1210 00:29:32.050123  887464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-527107/.minikube/key.pem (1679 bytes)
	I1210 00:29:32.050196  887464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-527107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca-key.pem org=jenkins.newest-cni-451721 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-451721]
	I1210 00:29:32.326147  887464 provision.go:177] copyRemoteCerts
	I1210 00:29:32.326235  887464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:29:32.326275  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:32.345347  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:32.440306  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 00:29:32.464567  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:29:32.489137  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 00:29:32.512700  887464 provision.go:87] duration metric: took 481.257458ms to configureAuth
	I1210 00:29:32.512730  887464 ubuntu.go:193] setting minikube options for container-runtime
	I1210 00:29:32.512986  887464 config.go:182] Loaded profile config "newest-cni-451721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:29:32.513003  887464 machine.go:96] duration metric: took 3.951040589s to provisionDockerMachine
	I1210 00:29:32.513012  887464 start.go:293] postStartSetup for "newest-cni-451721" (driver="docker")
	I1210 00:29:32.513029  887464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:29:32.513092  887464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:29:32.513140  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:32.532331  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:32.627992  887464 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:29:32.631492  887464 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 00:29:32.631533  887464 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1210 00:29:32.631543  887464 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1210 00:29:32.631553  887464 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1210 00:29:32.631567  887464 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-527107/.minikube/addons for local assets ...
	I1210 00:29:32.631629  887464 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-527107/.minikube/files for local assets ...
	I1210 00:29:32.631731  887464 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem -> 5339162.pem in /etc/ssl/certs
	I1210 00:29:32.631856  887464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:29:32.640237  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem --> /etc/ssl/certs/5339162.pem (1708 bytes)
	I1210 00:29:32.663676  887464 start.go:296] duration metric: took 150.640477ms for postStartSetup
	I1210 00:29:32.663763  887464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:29:32.663811  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:32.682465  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:32.771807  887464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 00:29:32.776386  887464 fix.go:56] duration metric: took 4.550910019s for fixHost
	I1210 00:29:32.776419  887464 start.go:83] releasing machines lock for "newest-cni-451721", held for 4.550965314s
	I1210 00:29:32.776510  887464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-451721
	I1210 00:29:32.796432  887464 ssh_runner.go:195] Run: cat /version.json
	I1210 00:29:32.796486  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:32.796576  887464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:29:32.796638  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:32.816364  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:32.816675  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:32.981168  887464 ssh_runner.go:195] Run: systemctl --version
	I1210 00:29:32.985700  887464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 00:29:32.990213  887464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1210 00:29:33.007987  887464 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
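	[Editor's note: the find/sed one-liner above is minikube's loopback CNI patch. It injects a "name" field when one is missing and pins "cniVersion" to 1.0.0 so the config parses under current CNI libraries. A minimal before/after sketch; the file name and the "before" contents are assumptions about the kicbase default, not captured from this run:
	  # e.g. /etc/cni/net.d/200-loopback.conf (hypothetical) before the patch
	  { "cniVersion": "0.3.1", "type": "loopback" }
	  # after the patch
	  { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }
	]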
	I1210 00:29:33.008087  887464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:29:33.017598  887464 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 00:29:33.017635  887464 start.go:495] detecting cgroup driver to use...
	I1210 00:29:33.017670  887464 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 00:29:33.017763  887464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 00:29:33.031659  887464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 00:29:33.043074  887464 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:29:33.043144  887464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:29:33.056029  887464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:29:33.067719  887464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:29:33.145800  887464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:29:33.226113  887464 docker.go:233] disabling docker service ...
	I1210 00:29:33.226190  887464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:29:33.240118  887464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:29:33.252946  887464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:29:33.338415  887464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:29:33.424824  887464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:29:33.438244  887464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:29:33.458797  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1210 00:29:33.469817  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 00:29:33.481614  887464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 00:29:33.481680  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 00:29:33.493683  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 00:29:33.504187  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 00:29:33.515028  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 00:29:33.526651  887464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:29:33.537394  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 00:29:33.547664  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 00:29:33.557993  887464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 00:29:33.568674  887464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:29:33.578498  887464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:29:33.588050  887464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:29:33.664722  887464 ssh_runner.go:195] Run: sudo systemctl restart containerd
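	[Editor's note: taken together, the sed edits above switch containerd to the cgroupfs cgroup driver, pin the pause:3.10 sandbox image, force the runc v2 shim, and enable unprivileged ports before the restart. A sketch of the resulting /etc/containerd/config.toml settings, reconstructed from the commands rather than dumped from the node:
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    restrict_oom_score_adj = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	]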
	I1210 00:29:33.770599  887464 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 00:29:33.770674  887464 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 00:29:33.775072  887464 start.go:563] Will wait 60s for crictl version
	I1210 00:29:33.775153  887464 ssh_runner.go:195] Run: which crictl
	I1210 00:29:33.779199  887464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:29:33.816009  887464 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1210 00:29:33.816096  887464 ssh_runner.go:195] Run: containerd --version
	I1210 00:29:33.841408  887464 ssh_runner.go:195] Run: containerd --version
	I1210 00:29:33.868731  887464 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
	I1210 00:29:33.870484  887464 cli_runner.go:164] Run: docker network inspect newest-cni-451721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 00:29:33.889181  887464 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 00:29:33.893315  887464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:29:33.906416  887464 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 00:29:33.907756  887464 kubeadm.go:883] updating cluster {Name:newest-cni-451721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-451721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:29:33.907938  887464 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1210 00:29:33.908022  887464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:29:33.941650  887464 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 00:29:33.941676  887464 containerd.go:534] Images already preloaded, skipping extraction
	I1210 00:29:33.941733  887464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:29:33.978652  887464 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 00:29:33.978680  887464 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:29:33.978692  887464 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.2 containerd true true} ...
	I1210 00:29:33.978829  887464 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-451721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-451721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:29:33.978962  887464 ssh_runner.go:195] Run: sudo crictl info
	I1210 00:29:34.014053  887464 cni.go:84] Creating CNI manager for ""
	I1210 00:29:34.014075  887464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 00:29:34.014085  887464 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1210 00:29:34.014113  887464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-451721 NodeName:newest-cni-451721 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:29:34.014227  887464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-451721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
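	[Editor's note: the manifest above is what gets uploaded as /var/tmp/minikube/kubeadm.yaml.new a few steps below. If you need to sanity-check such a file by hand, kubeadm v1.26+ can validate it offline; a hypothetical invocation against this node's binaries, not something the test harness runs:
	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	]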
	I1210 00:29:34.014287  887464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:29:34.023288  887464 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:29:34.023375  887464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:29:34.032305  887464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I1210 00:29:34.049707  887464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:29:34.068733  887464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2497 bytes)
	I1210 00:29:34.086908  887464 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 00:29:34.091099  887464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:29:34.102305  887464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:29:34.181346  887464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:29:34.195374  887464 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721 for IP: 192.168.76.2
	I1210 00:29:34.195403  887464 certs.go:194] generating shared ca certs ...
	I1210 00:29:34.195424  887464 certs.go:226] acquiring lock for ca certs: {Name:mk98ae8901439369b17532a89b5c8e73a55c28a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:29:34.195593  887464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-527107/.minikube/ca.key
	I1210 00:29:34.195656  887464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-527107/.minikube/proxy-client-ca.key
	I1210 00:29:34.195673  887464 certs.go:256] generating profile certs ...
	I1210 00:29:34.195789  887464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/client.key
	I1210 00:29:34.195881  887464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/apiserver.key.9d7ec933
	I1210 00:29:34.195944  887464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/proxy-client.key
	I1210 00:29:34.196096  887464 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/533916.pem (1338 bytes)
	W1210 00:29:34.196135  887464 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-527107/.minikube/certs/533916_empty.pem, impossibly tiny 0 bytes
	I1210 00:29:34.196148  887464 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:29:34.196181  887464 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:29:34.196218  887464 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:29:34.196249  887464 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/certs/key.pem (1679 bytes)
	I1210 00:29:34.196302  887464 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem (1708 bytes)
	I1210 00:29:34.197251  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:29:34.222765  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:29:34.248903  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:29:34.329829  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 00:29:34.359022  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:29:34.383488  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:29:34.408011  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:29:34.431914  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:29:34.457411  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/ssl/certs/5339162.pem --> /usr/share/ca-certificates/5339162.pem (1708 bytes)
	I1210 00:29:34.485273  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:29:34.509422  887464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-527107/.minikube/certs/533916.pem --> /usr/share/ca-certificates/533916.pem (1338 bytes)
	I1210 00:29:34.532168  887464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:29:34.549326  887464 ssh_runner.go:195] Run: openssl version
	I1210 00:29:34.554726  887464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/533916.pem && ln -fs /usr/share/ca-certificates/533916.pem /etc/ssl/certs/533916.pem"
	I1210 00:29:34.564061  887464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/533916.pem
	I1210 00:29:34.567795  887464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:51 /usr/share/ca-certificates/533916.pem
	I1210 00:29:34.567868  887464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/533916.pem
	I1210 00:29:34.574598  887464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/533916.pem /etc/ssl/certs/51391683.0"
	I1210 00:29:34.583338  887464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5339162.pem && ln -fs /usr/share/ca-certificates/5339162.pem /etc/ssl/certs/5339162.pem"
	I1210 00:29:34.592492  887464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5339162.pem
	I1210 00:29:34.595953  887464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:51 /usr/share/ca-certificates/5339162.pem
	I1210 00:29:34.596010  887464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5339162.pem
	I1210 00:29:34.602708  887464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5339162.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:29:34.611894  887464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:29:34.621005  887464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:29:34.624374  887464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:29:34.624434  887464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:29:34.630826  887464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
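	[Editor's note: the test -L checks above follow OpenSSL's hashed-symlink convention: a CA is trusted once /etc/ssl/certs contains a link named <subject-hash>.0, where the hash is what the preceding openssl x509 -hash -noout calls print (51391683, 3ec20f2e, b5213941 here). A hypothetical equivalent for one cert:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	]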
	I1210 00:29:34.639526  887464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:29:34.643259  887464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:29:34.649873  887464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:29:34.656724  887464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:29:34.663508  887464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:29:34.670632  887464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:29:34.677777  887464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:29:34.684421  887464 kubeadm.go:392] StartCluster: {Name:newest-cni-451721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-451721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
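The ClusterConfig printed above is the per-profile configuration minikube persists on disk. Assuming the usual profile layout (the profiles directory appears throughout this log; the config.json filename is an assumption here), it can be inspected directly:

    # Pretty-print the stored cluster config for the newest-cni-451721 profile.
    # Path layout assumed from the .minikube/profiles/<name>/ entries in this log.
    python3 -m json.tool \
      /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/newest-cni-451721/config.json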
	I1210 00:29:34.684548  887464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 00:29:34.684597  887464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:29:34.720425  887464 cri.go:89] found id: "f45296fe00e1ba175b8a13e74278866fe94fb00fe2e40d2ded889f6424bcd2f6"
	I1210 00:29:34.720455  887464 cri.go:89] found id: "7240dea8c0bf6b0e231897a661459ab1e0f16e81578a218747c803ee5c62c882"
	I1210 00:29:34.720459  887464 cri.go:89] found id: "f10b4481bb2c33d26ae79853dfb882ccfea515ba70ee60ce80479fac7a775d64"
	I1210 00:29:34.720462  887464 cri.go:89] found id: "29bd971069607c5d70cab703e88ecb05ffab6991df6d9130d3abd3f0e48f54fb"
	I1210 00:29:34.720464  887464 cri.go:89] found id: "990a3b2339579634ce6a297e958b50d16dd4d62aa467d77c0f621f7de7b20e5e"
	I1210 00:29:34.720467  887464 cri.go:89] found id: "1cee27fc4429682ec492552b820dbbe6d9126f776c89400dda68d38002ed05d4"
	I1210 00:29:34.720469  887464 cri.go:89] found id: ""
	I1210 00:29:34.720569  887464 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1210 00:29:34.732756  887464 cri.go:116] JSON = null
	W1210 00:29:34.732817  887464 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 6
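The warning above comes from cross-checking two views of the same runtime: crictl reports six kube-system containers, while runc's state list (used to detect paused containers) returns null, so no unpause is attempted. Both commands appear verbatim in the log and can be re-run as-is:

    # What the CRI sees in the kube-system namespace...
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # ...versus what runc tracks in containerd's k8s.io runc root.
    sudo runc --root /run/containerd/runc/k8s.io list -f json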
	I1210 00:29:34.732887  887464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:29:34.741532  887464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:29:34.741556  887464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:29:34.741611  887464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:29:34.750959  887464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:29:34.751556  887464 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-451721" does not appear in /home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:29:34.751862  887464 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-527107/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-451721" cluster setting kubeconfig missing "newest-cni-451721" context setting]
	I1210 00:29:34.752462  887464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-527107/kubeconfig: {Name:mk47c0b52ce4821be2777fdd40884aa11f573a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:29:34.754198  887464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:29:34.764702  887464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 00:29:34.764766  887464 kubeadm.go:597] duration metric: took 23.189594ms to restartPrimaryControlPlane
	I1210 00:29:34.764780  887464 kubeadm.go:394] duration metric: took 80.376259ms to StartCluster
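The restart decision above hinges on a single diff: if the kubeadm config already on the node matches the newly rendered one, the running control plane is reused rather than reconfigured. Restated as a sketch:

    # An empty diff (exit status 0) means no reconfiguration is required.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "running cluster does not require reconfiguration"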
	I1210 00:29:34.764801  887464 settings.go:142] acquiring lock: {Name:mk0114e7c414efdfe48670d68c91542cc6018bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:29:34.764879  887464 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:29:34.765807  887464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-527107/kubeconfig: {Name:mk47c0b52ce4821be2777fdd40884aa11f573a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:29:34.766059  887464 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 00:29:34.766237  887464 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:29:34.766342  887464 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-451721"
	I1210 00:29:34.766386  887464 addons.go:69] Setting default-storageclass=true in profile "newest-cni-451721"
	I1210 00:29:34.766403  887464 addons.go:69] Setting metrics-server=true in profile "newest-cni-451721"
	I1210 00:29:34.766421  887464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-451721"
	I1210 00:29:34.766434  887464 addons.go:234] Setting addon metrics-server=true in "newest-cni-451721"
	I1210 00:29:34.766434  887464 config.go:182] Loaded profile config "newest-cni-451721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	W1210 00:29:34.766448  887464 addons.go:243] addon metrics-server should already be in state true
	I1210 00:29:34.766434  887464 addons.go:69] Setting dashboard=true in profile "newest-cni-451721"
	I1210 00:29:34.766492  887464 addons.go:234] Setting addon dashboard=true in "newest-cni-451721"
	I1210 00:29:34.766497  887464 host.go:66] Checking if "newest-cni-451721" exists ...
	W1210 00:29:34.766509  887464 addons.go:243] addon dashboard should already be in state true
	I1210 00:29:34.766550  887464 host.go:66] Checking if "newest-cni-451721" exists ...
	I1210 00:29:34.766800  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:34.767072  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:34.767090  887464 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-451721"
	W1210 00:29:34.767105  887464 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:29:34.767133  887464 host.go:66] Checking if "newest-cni-451721" exists ...
	I1210 00:29:34.767072  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:34.767642  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:34.775231  887464 out.go:177] * Verifying Kubernetes components...
	I1210 00:29:34.777185  887464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:29:34.796791  887464 addons.go:234] Setting addon default-storageclass=true in "newest-cni-451721"
	W1210 00:29:34.796822  887464 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:29:34.796855  887464 host.go:66] Checking if "newest-cni-451721" exists ...
	I1210 00:29:34.797358  887464 cli_runner.go:164] Run: docker container inspect newest-cni-451721 --format={{.State.Status}}
	I1210 00:29:34.803100  887464 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 00:29:34.803250  887464 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:29:34.803345  887464 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:29:34.804400  887464 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:29:34.805594  887464 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:29:34.805664  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:34.806737  887464 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1210 00:29:30.284394  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:32.285567  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:34.789978  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:34.806919  887464 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:29:34.806941  887464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:29:34.806995  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:34.810379  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 00:29:34.810407  887464 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 00:29:34.810471  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:34.822121  887464 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:29:34.822146  887464 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:29:34.822211  887464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-451721
	I1210 00:29:34.832359  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:34.837230  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:34.848827  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:34.866146  887464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/newest-cni-451721/id_rsa Username:docker}
	I1210 00:29:35.053589  887464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:29:35.130427  887464 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:29:35.130519  887464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:29:35.155921  887464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:29:35.228590  887464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:29:35.230681  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 00:29:35.230714  887464 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 00:29:35.234672  887464 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:29:35.234700  887464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:29:35.259848  887464 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:29:35.259882  887464 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:29:35.327740  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 00:29:35.327780  887464 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 00:29:35.433374  887464 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:29:35.433420  887464 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:29:35.439815  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 00:29:35.439855  887464 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 00:29:35.546184  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 00:29:35.546214  887464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 00:29:35.547308  887464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:29:35.630764  887464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 00:29:35.645619  887464 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 00:29:35.645675  887464 retry.go:31] will retry after 353.055628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 00:29:35.645766  887464 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 00:29:35.645792  887464 retry.go:31] will retry after 311.554382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
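Both applies failed for the same reason: kubelet had just been restarted and the apiserver was not yet listening on localhost:8443, so kubectl's OpenAPI validation request was refused. Minikube retries with a short backoff, as the retry.go lines show. A minimal shell sketch of that behaviour (the real retry logic lives in minikube's Go code):

    # Keep re-applying until the apiserver accepts connections.
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml; do
      sleep 0.3   # matches the ~311-353ms retry intervals logged above
    done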
	I1210 00:29:35.648495  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 00:29:35.648527  887464 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 00:29:35.746716  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 00:29:35.746752  887464 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 00:29:35.834393  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 00:29:35.834427  887464 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 00:29:35.858323  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 00:29:35.858353  887464 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 00:29:35.939882  887464 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:29:35.939911  887464 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 00:29:35.957491  887464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:29:35.957605  887464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:29:35.999237  887464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:29:37.285571  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:39.785467  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:40.933317  887464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.385951285s)
	I1210 00:29:40.933376  887464 addons.go:475] Verifying addon metrics-server=true in "newest-cni-451721"
	I1210 00:29:40.933328  887464 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.302534863s)
	I1210 00:29:40.933410  887464 api_server.go:72] duration metric: took 6.167314054s to wait for apiserver process to appear ...
	I1210 00:29:40.933423  887464 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:29:40.933446  887464 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 00:29:40.938900  887464 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 00:29:40.945648  887464 api_server.go:141] control plane version: v1.31.2
	I1210 00:29:40.945689  887464 api_server.go:131] duration metric: took 12.258143ms to wait for apiserver health ...
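The healthz probe above can be reproduced from the host. The test uses an authenticated Go client; the curl flags below (skipping TLS verification, anonymous access) are an assumption for illustration:

    # A healthy apiserver answers 200 with the body "ok".
    curl -sk https://192.168.76.2:8443/healthz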
	I1210 00:29:40.945702  887464 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:29:40.955030  887464 system_pods.go:59] 9 kube-system pods found
	I1210 00:29:40.955077  887464 system_pods.go:61] "coredns-7c65d6cfc9-4g9ws" [d880636c-3f52-4266-b53b-588922ffa1a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:29:40.955107  887464 system_pods.go:61] "etcd-newest-cni-451721" [23b0abb1-873a-4c7e-9b2d-d37c891d429b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:29:40.955119  887464 system_pods.go:61] "kindnet-bgv7c" [9a246ce8-7f88-4a61-99b8-5003d2988222] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 00:29:40.955138  887464 system_pods.go:61] "kube-apiserver-newest-cni-451721" [0345a79b-8c71-45ac-b297-08882c7c9420] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:29:40.955150  887464 system_pods.go:61] "kube-controller-manager-newest-cni-451721" [5775aac2-64af-4cbb-a8b9-93c7f5c9f7d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:29:40.955156  887464 system_pods.go:61] "kube-proxy-6xl4q" [c7d3ada4-2dd1-433a-a2ec-93f2633cca61] Running
	I1210 00:29:40.955165  887464 system_pods.go:61] "kube-scheduler-newest-cni-451721" [4c415b07-95d5-47f6-b7f3-ed552712e94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:29:40.955177  887464 system_pods.go:61] "metrics-server-6867b74b74-zftqz" [b1947c40-5f55-435f-ba46-caae73951f90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:29:40.955186  887464 system_pods.go:61] "storage-provisioner" [a9dbb794-3767-4fc5-8ea5-4fffbb6105d3] Running
	I1210 00:29:40.955195  887464 system_pods.go:74] duration metric: took 9.485295ms to wait for pod list to return data ...
	I1210 00:29:40.955209  887464 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:29:40.958794  887464 default_sa.go:45] found service account: "default"
	I1210 00:29:40.958827  887464 default_sa.go:55] duration metric: took 3.606033ms for default service account to be created ...
	I1210 00:29:40.958889  887464 kubeadm.go:582] duration metric: took 6.192791752s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:29:40.958912  887464 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:29:40.962661  887464 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 00:29:40.962695  887464 node_conditions.go:123] node cpu capacity is 8
	I1210 00:29:40.962721  887464 node_conditions.go:105] duration metric: took 3.791678ms to run NodePressure ...
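The NodePressure step reads node capacity and pressure conditions through the API; the ephemeral-storage and cpu figures above come from the node's status. A hypothetical kubectl equivalent:

    # Capacity fields checked above (ephemeral-storage: 304681132Ki, cpu: 8).
    kubectl get nodes -o jsonpath='{.items[0].status.capacity}'
    # Pressure conditions that would block startup if True.
    kubectl describe nodes | grep -E 'MemoryPressure|DiskPressure|PIDPressure'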
	I1210 00:29:40.962735  887464 start.go:241] waiting for startup goroutines ...
	I1210 00:29:41.042991  887464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.085448346s)
	I1210 00:29:41.043039  887464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.085379809s)
	I1210 00:29:41.043119  887464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.043845229s)
	I1210 00:29:41.044774  887464 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-451721 addons enable metrics-server
	
	I1210 00:29:41.049622  887464 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1210 00:29:41.051006  887464 addons.go:510] duration metric: took 6.284770091s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1210 00:29:41.051059  887464 start.go:246] waiting for cluster config update ...
	I1210 00:29:41.051074  887464 start.go:255] writing updated cluster config ...
	I1210 00:29:41.051387  887464 ssh_runner.go:195] Run: rm -f paused
	I1210 00:29:41.102339  887464 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:29:41.104014  887464 out.go:177] * Done! kubectl is now configured to use "newest-cni-451721" cluster and "default" namespace by default
	I1210 00:29:41.786195  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:44.285629  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:46.286997  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:48.784966  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:50.785565  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:53.287484  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:55.785238  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:29:58.285947  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:00.784947  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:02.785627  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:05.285425  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:07.785233  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:10.285053  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:12.785346  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:15.284585  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:17.285141  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:19.785686  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:21.786049  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:24.285409  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:26.286022  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:28.785097  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:30.785142  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:33.285545  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:35.785510  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:38.284978  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:40.784638  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:42.784707  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:44.784825  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:47.285489  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:49.286104  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:51.785143  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:53.786017  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:56.285495  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:30:58.285964  869958 pod_ready.go:103] pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace has status "Ready":"False"
	I1210 00:31:00.285755  869958 pod_ready.go:82] duration metric: took 4m0.006380848s for pod "metrics-server-9975d5f86-9wg6p" in "kube-system" namespace to be "Ready" ...
	E1210 00:31:00.285781  869958 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:31:00.285790  869958 pod_ready.go:39] duration metric: took 5m30.751897187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
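This is the wait that sinks the test: the old-k8s-version cluster's metrics-server pod never reported Ready, and the 4-minute per-pod budget (inside the 5m30s overall extra wait) expired. A hypothetical one-liner for the same wait; the k8s-app=metrics-server label is an assumption, taken from the addon's usual labelling:

    kubectl -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=metrics-server --timeout=4m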
	I1210 00:31:00.285822  869958 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:31:00.285858  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:31:00.285917  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:31:00.324417  869958 cri.go:89] found id: "9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:00.324440  869958 cri.go:89] found id: ""
	I1210 00:31:00.324448  869958 logs.go:282] 1 containers: [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d]
	I1210 00:31:00.324499  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.328595  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 00:31:00.328691  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:31:00.364828  869958 cri.go:89] found id: "de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:00.364857  869958 cri.go:89] found id: ""
	I1210 00:31:00.364868  869958 logs.go:282] 1 containers: [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2]
	I1210 00:31:00.364938  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.368615  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 00:31:00.368696  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:31:00.403140  869958 cri.go:89] found id: "d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:00.403164  869958 cri.go:89] found id: ""
	I1210 00:31:00.403174  869958 logs.go:282] 1 containers: [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0]
	I1210 00:31:00.403233  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.406693  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:31:00.406754  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:31:00.440261  869958 cri.go:89] found id: "e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:00.440286  869958 cri.go:89] found id: ""
	I1210 00:31:00.440294  869958 logs.go:282] 1 containers: [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07]
	I1210 00:31:00.440356  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.443836  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:31:00.443908  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:31:00.478920  869958 cri.go:89] found id: "930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:00.478945  869958 cri.go:89] found id: ""
	I1210 00:31:00.478955  869958 logs.go:282] 1 containers: [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1]
	I1210 00:31:00.479020  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.482648  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:31:00.482713  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:31:00.517931  869958 cri.go:89] found id: "7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:00.517959  869958 cri.go:89] found id: ""
	I1210 00:31:00.517969  869958 logs.go:282] 1 containers: [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82]
	I1210 00:31:00.518027  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.522393  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 00:31:00.522470  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:31:00.558076  869958 cri.go:89] found id: "1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:00.558099  869958 cri.go:89] found id: ""
	I1210 00:31:00.558107  869958 logs.go:282] 1 containers: [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a]
	I1210 00:31:00.558159  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.561741  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:31:00.561812  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:31:00.598626  869958 cri.go:89] found id: "b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:00.598664  869958 cri.go:89] found id: "5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:00.598674  869958 cri.go:89] found id: ""
	I1210 00:31:00.598682  869958 logs.go:282] 2 containers: [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24]
	I1210 00:31:00.598746  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.602345  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:00.605648  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:31:00.605713  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:31:00.638537  869958 cri.go:89] found id: "71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:00.638564  869958 cri.go:89] found id: ""
	I1210 00:31:00.638574  869958 logs.go:282] 1 containers: [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e]
	I1210 00:31:00.638635  869958 ssh_runner.go:195] Run: which crictl
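The container discovery above repeats one probe per control-plane component. Collapsed into a loop using the exact command from the log:

    # List container IDs (running or exited) for each component of interest.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done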
	I1210 00:31:00.642267  869958 logs.go:123] Gathering logs for kubelet ...
	I1210 00:31:00.642297  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 00:31:00.684072  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.013371    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.684251  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.276847    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.686239  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:49 old-k8s-version-280963 kubelet[1066]: E1210 00:25:49.092116    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.687741  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:00 old-k8s-version-280963 kubelet[1066]: E1210 00:26:00.341400    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.687978  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:01 old-k8s-version-280963 kubelet[1066]: E1210 00:26:01.348361    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.688111  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.063829    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.688445  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.351893    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.690436  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:14 old-k8s-version-280963 kubelet[1066]: E1210 00:26:14.082796    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.691134  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:16 old-k8s-version-280963 kubelet[1066]: E1210 00:26:16.385007    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.691375  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:20 old-k8s-version-280963 kubelet[1066]: E1210 00:26:20.929425    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.691523  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:25 old-k8s-version-280963 kubelet[1066]: E1210 00:26:25.063820    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.691758  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:32 old-k8s-version-280963 kubelet[1066]: E1210 00:26:32.063614    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.691889  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:37 old-k8s-version-280963 kubelet[1066]: E1210 00:26:37.063859    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.692313  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:45 old-k8s-version-280963 kubelet[1066]: E1210 00:26:45.451805    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.692572  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:50 old-k8s-version-280963 kubelet[1066]: E1210 00:26:50.929571    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.692717  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:51 old-k8s-version-280963 kubelet[1066]: E1210 00:26:51.063691    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.692950  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:04 old-k8s-version-280963 kubelet[1066]: E1210 00:27:04.063486    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.694659  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:06 old-k8s-version-280963 kubelet[1066]: E1210 00:27:06.100485    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.694960  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:16 old-k8s-version-280963 kubelet[1066]: E1210 00:27:16.063301    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.695110  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:18 old-k8s-version-280963 kubelet[1066]: E1210 00:27:18.063936    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.695245  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:29 old-k8s-version-280963 kubelet[1066]: E1210 00:27:29.063910    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.695668  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:30 old-k8s-version-280963 kubelet[1066]: E1210 00:27:30.551122    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.695901  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:31 old-k8s-version-280963 kubelet[1066]: E1210 00:27:31.554624    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.696137  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:42 old-k8s-version-280963 kubelet[1066]: E1210 00:27:42.063651    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.696291  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:43 old-k8s-version-280963 kubelet[1066]: E1210 00:27:43.063770    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.696535  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:54 old-k8s-version-280963 kubelet[1066]: E1210 00:27:54.063558    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.696667  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:55 old-k8s-version-280963 kubelet[1066]: E1210 00:27:55.063561    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.696899  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:05 old-k8s-version-280963 kubelet[1066]: E1210 00:28:05.063379    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.697036  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:10 old-k8s-version-280963 kubelet[1066]: E1210 00:28:10.063837    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.697268  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:18 old-k8s-version-280963 kubelet[1066]: E1210 00:28:18.063477    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.697399  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:23 old-k8s-version-280963 kubelet[1066]: E1210 00:28:23.063704    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.697631  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:29 old-k8s-version-280963 kubelet[1066]: E1210 00:28:29.063218    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.699384  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:36 old-k8s-version-280963 kubelet[1066]: E1210 00:28:36.089234    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:00.699619  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:40 old-k8s-version-280963 kubelet[1066]: E1210 00:28:40.063266    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.699750  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:50 old-k8s-version-280963 kubelet[1066]: E1210 00:28:50.063870    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.700169  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:55 old-k8s-version-280963 kubelet[1066]: E1210 00:28:55.726230    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.700403  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:00 old-k8s-version-280963 kubelet[1066]: E1210 00:29:00.929346    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.700534  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:01 old-k8s-version-280963 kubelet[1066]: E1210 00:29:01.063931    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.700665  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:12 old-k8s-version-280963 kubelet[1066]: E1210 00:29:12.063883    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.700897  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:13 old-k8s-version-280963 kubelet[1066]: E1210 00:29:13.063415    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.701157  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063471    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.701316  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063913    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.701550  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063693    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.701682  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063872    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.701914  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:49 old-k8s-version-280963 kubelet[1066]: E1210 00:29:49.063306    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.702050  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:50 old-k8s-version-280963 kubelet[1066]: E1210 00:29:50.063838    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.702287  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:03 old-k8s-version-280963 kubelet[1066]: E1210 00:30:03.063224    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.702419  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:05 old-k8s-version-280963 kubelet[1066]: E1210 00:30:05.063807    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.702550  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:17 old-k8s-version-280963 kubelet[1066]: E1210 00:30:17.063774    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.702784  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:18 old-k8s-version-280963 kubelet[1066]: E1210 00:30:18.063380    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703080  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:29 old-k8s-version-280963 kubelet[1066]: E1210 00:30:29.063392    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703219  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.703456  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703587  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:00.703818  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:00.703952  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:00.703964  869958 logs.go:123] Gathering logs for kube-proxy [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1] ...
	I1210 00:31:00.703989  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:00.739278  869958 logs.go:123] Gathering logs for kube-controller-manager [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82] ...
	I1210 00:31:00.739323  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:00.809749  869958 logs.go:123] Gathering logs for storage-provisioner [5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24] ...
	I1210 00:31:00.809793  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:00.843898  869958 logs.go:123] Gathering logs for containerd ...
	I1210 00:31:00.843932  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 00:31:00.905189  869958 logs.go:123] Gathering logs for kube-apiserver [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d] ...
	I1210 00:31:00.905248  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:00.975171  869958 logs.go:123] Gathering logs for kube-scheduler [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07] ...
	I1210 00:31:00.975214  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:01.018685  869958 logs.go:123] Gathering logs for kubernetes-dashboard [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e] ...
	I1210 00:31:01.018727  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:01.055194  869958 logs.go:123] Gathering logs for dmesg ...
	I1210 00:31:01.055228  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:31:01.082490  869958 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:31:01.082531  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:31:01.188477  869958 logs.go:123] Gathering logs for etcd [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2] ...
	I1210 00:31:01.188515  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:01.231162  869958 logs.go:123] Gathering logs for kindnet [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a] ...
	I1210 00:31:01.231200  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:01.270495  869958 logs.go:123] Gathering logs for storage-provisioner [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e] ...
	I1210 00:31:01.270532  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:01.304676  869958 logs.go:123] Gathering logs for container status ...
	I1210 00:31:01.304717  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:31:01.342082  869958 logs.go:123] Gathering logs for coredns [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0] ...
	I1210 00:31:01.342114  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:01.377229  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:01.377257  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1210 00:31:01.377336  869958 out.go:270] X Problems detected in kubelet:
	W1210 00:31:01.377354  869958 out.go:270]   Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:01.377363  869958 out.go:270]   Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:01.377375  869958 out.go:270]   Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:01.377384  869958 out.go:270]   Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:01.377397  869958 out.go:270]   Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:01.377405  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:01.377416  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:11.378266  869958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:31:11.390994  869958 api_server.go:72] duration metric: took 5m52.278015509s to wait for apiserver process to appear ...
	I1210 00:31:11.391028  869958 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:31:11.391084  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:31:11.391155  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:31:11.425078  869958 cri.go:89] found id: "9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:11.425104  869958 cri.go:89] found id: ""
	I1210 00:31:11.425113  869958 logs.go:282] 1 containers: [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d]
	I1210 00:31:11.425183  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.428759  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 00:31:11.428836  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:31:11.463276  869958 cri.go:89] found id: "de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:11.463305  869958 cri.go:89] found id: ""
	I1210 00:31:11.463313  869958 logs.go:282] 1 containers: [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2]
	I1210 00:31:11.463360  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.467102  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 00:31:11.467171  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:31:11.503957  869958 cri.go:89] found id: "d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:11.504006  869958 cri.go:89] found id: ""
	I1210 00:31:11.504016  869958 logs.go:282] 1 containers: [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0]
	I1210 00:31:11.504079  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.507966  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:31:11.508041  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:31:11.542392  869958 cri.go:89] found id: "e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:11.542415  869958 cri.go:89] found id: ""
	I1210 00:31:11.542422  869958 logs.go:282] 1 containers: [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07]
	I1210 00:31:11.542484  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.546043  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:31:11.546105  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:31:11.583274  869958 cri.go:89] found id: "930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:11.583305  869958 cri.go:89] found id: ""
	I1210 00:31:11.583316  869958 logs.go:282] 1 containers: [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1]
	I1210 00:31:11.583376  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.587533  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:31:11.587622  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:31:11.622287  869958 cri.go:89] found id: "7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:11.622329  869958 cri.go:89] found id: ""
	I1210 00:31:11.622338  869958 logs.go:282] 1 containers: [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82]
	I1210 00:31:11.622399  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.626227  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 00:31:11.626300  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:31:11.661096  869958 cri.go:89] found id: "1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:11.661119  869958 cri.go:89] found id: ""
	I1210 00:31:11.661126  869958 logs.go:282] 1 containers: [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a]
	I1210 00:31:11.661173  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.664907  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:31:11.664974  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:31:11.701413  869958 cri.go:89] found id: "71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:11.701439  869958 cri.go:89] found id: ""
	I1210 00:31:11.701448  869958 logs.go:282] 1 containers: [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e]
	I1210 00:31:11.701498  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.705199  869958 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:31:11.705268  869958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:31:11.739637  869958 cri.go:89] found id: "b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:11.739669  869958 cri.go:89] found id: "5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:11.739674  869958 cri.go:89] found id: ""
	I1210 00:31:11.739682  869958 logs.go:282] 2 containers: [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24]
	I1210 00:31:11.739748  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.743857  869958 ssh_runner.go:195] Run: which crictl
	I1210 00:31:11.747864  869958 logs.go:123] Gathering logs for kube-scheduler [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07] ...
	I1210 00:31:11.747897  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07"
	I1210 00:31:11.787539  869958 logs.go:123] Gathering logs for kube-controller-manager [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82] ...
	I1210 00:31:11.787577  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82"
	I1210 00:31:11.854239  869958 logs.go:123] Gathering logs for kubernetes-dashboard [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e] ...
	I1210 00:31:11.854286  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e"
	I1210 00:31:11.890628  869958 logs.go:123] Gathering logs for storage-provisioner [5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24] ...
	I1210 00:31:11.890659  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24"
	I1210 00:31:11.924933  869958 logs.go:123] Gathering logs for dmesg ...
	I1210 00:31:11.924977  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:31:11.952597  869958 logs.go:123] Gathering logs for kube-apiserver [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d] ...
	I1210 00:31:11.952639  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d"
	I1210 00:31:12.008186  869958 logs.go:123] Gathering logs for etcd [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2] ...
	I1210 00:31:12.008225  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2"
	I1210 00:31:12.050981  869958 logs.go:123] Gathering logs for kindnet [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a] ...
	I1210 00:31:12.051019  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a"
	I1210 00:31:12.092306  869958 logs.go:123] Gathering logs for storage-provisioner [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e] ...
	I1210 00:31:12.092348  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e"
	I1210 00:31:12.126824  869958 logs.go:123] Gathering logs for kubelet ...
	I1210 00:31:12.126877  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 00:31:12.167149  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.013371    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.167339  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:36 old-k8s-version-280963 kubelet[1066]: E1210 00:25:36.276847    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.169400  869958 logs.go:138] Found kubelet problem: Dec 10 00:25:49 old-k8s-version-280963 kubelet[1066]: E1210 00:25:49.092116    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.170983  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:00 old-k8s-version-280963 kubelet[1066]: E1210 00:26:00.341400    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.171225  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:01 old-k8s-version-280963 kubelet[1066]: E1210 00:26:01.348361    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.171364  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.063829    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.171704  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:02 old-k8s-version-280963 kubelet[1066]: E1210 00:26:02.351893    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.173755  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:14 old-k8s-version-280963 kubelet[1066]: E1210 00:26:14.082796    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.174505  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:16 old-k8s-version-280963 kubelet[1066]: E1210 00:26:16.385007    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.174745  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:20 old-k8s-version-280963 kubelet[1066]: E1210 00:26:20.929425    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.174905  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:25 old-k8s-version-280963 kubelet[1066]: E1210 00:26:25.063820    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.175143  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:32 old-k8s-version-280963 kubelet[1066]: E1210 00:26:32.063614    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.175321  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:37 old-k8s-version-280963 kubelet[1066]: E1210 00:26:37.063859    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.175745  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:45 old-k8s-version-280963 kubelet[1066]: E1210 00:26:45.451805    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.175980  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:50 old-k8s-version-280963 kubelet[1066]: E1210 00:26:50.929571    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.176120  869958 logs.go:138] Found kubelet problem: Dec 10 00:26:51 old-k8s-version-280963 kubelet[1066]: E1210 00:26:51.063691    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.176358  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:04 old-k8s-version-280963 kubelet[1066]: E1210 00:27:04.063486    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.178089  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:06 old-k8s-version-280963 kubelet[1066]: E1210 00:27:06.100485    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.178356  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:16 old-k8s-version-280963 kubelet[1066]: E1210 00:27:16.063301    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.178495  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:18 old-k8s-version-280963 kubelet[1066]: E1210 00:27:18.063936    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.178628  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:29 old-k8s-version-280963 kubelet[1066]: E1210 00:27:29.063910    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.179087  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:30 old-k8s-version-280963 kubelet[1066]: E1210 00:27:30.551122    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.179328  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:31 old-k8s-version-280963 kubelet[1066]: E1210 00:27:31.554624    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.179563  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:42 old-k8s-version-280963 kubelet[1066]: E1210 00:27:42.063651    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.179696  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:43 old-k8s-version-280963 kubelet[1066]: E1210 00:27:43.063770    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.179934  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:54 old-k8s-version-280963 kubelet[1066]: E1210 00:27:54.063558    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.180068  869958 logs.go:138] Found kubelet problem: Dec 10 00:27:55 old-k8s-version-280963 kubelet[1066]: E1210 00:27:55.063561    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.180308  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:05 old-k8s-version-280963 kubelet[1066]: E1210 00:28:05.063379    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.180463  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:10 old-k8s-version-280963 kubelet[1066]: E1210 00:28:10.063837    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.180701  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:18 old-k8s-version-280963 kubelet[1066]: E1210 00:28:18.063477    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.180836  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:23 old-k8s-version-280963 kubelet[1066]: E1210 00:28:23.063704    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.181073  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:29 old-k8s-version-280963 kubelet[1066]: E1210 00:28:29.063218    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.182823  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:36 old-k8s-version-280963 kubelet[1066]: E1210 00:28:36.089234    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1210 00:31:12.183092  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:40 old-k8s-version-280963 kubelet[1066]: E1210 00:28:40.063266    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.183227  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:50 old-k8s-version-280963 kubelet[1066]: E1210 00:28:50.063870    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.183655  869958 logs.go:138] Found kubelet problem: Dec 10 00:28:55 old-k8s-version-280963 kubelet[1066]: E1210 00:28:55.726230    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.183890  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:00 old-k8s-version-280963 kubelet[1066]: E1210 00:29:00.929346    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.184024  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:01 old-k8s-version-280963 kubelet[1066]: E1210 00:29:01.063931    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.184157  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:12 old-k8s-version-280963 kubelet[1066]: E1210 00:29:12.063883    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.184400  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:13 old-k8s-version-280963 kubelet[1066]: E1210 00:29:13.063415    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.184636  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063471    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.184769  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:25 old-k8s-version-280963 kubelet[1066]: E1210 00:29:25.063913    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.185004  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063693    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.185138  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:36 old-k8s-version-280963 kubelet[1066]: E1210 00:29:36.063872    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.185382  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:49 old-k8s-version-280963 kubelet[1066]: E1210 00:29:49.063306    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.185515  869958 logs.go:138] Found kubelet problem: Dec 10 00:29:50 old-k8s-version-280963 kubelet[1066]: E1210 00:29:50.063838    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.185750  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:03 old-k8s-version-280963 kubelet[1066]: E1210 00:30:03.063224    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.185883  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:05 old-k8s-version-280963 kubelet[1066]: E1210 00:30:05.063807    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.186018  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:17 old-k8s-version-280963 kubelet[1066]: E1210 00:30:17.063774    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.186253  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:18 old-k8s-version-280963 kubelet[1066]: E1210 00:30:18.063380    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.186506  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:29 old-k8s-version-280963 kubelet[1066]: E1210 00:30:29.063392    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.186644  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.186900  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.187083  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.187488  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.187699  869958 logs.go:138] Found kubelet problem: Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.188009  869958 logs.go:138] Found kubelet problem: Dec 10 00:31:06 old-k8s-version-280963 kubelet[1066]: E1210 00:31:06.063272    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.188179  869958 logs.go:138] Found kubelet problem: Dec 10 00:31:09 old-k8s-version-280963 kubelet[1066]: E1210 00:31:09.063618    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:12.188199  869958 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:31:12.188219  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:31:12.291065  869958 logs.go:123] Gathering logs for kube-proxy [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1] ...
	I1210 00:31:12.291103  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1"
	I1210 00:31:12.325400  869958 logs.go:123] Gathering logs for containerd ...
	I1210 00:31:12.325437  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 00:31:12.385096  869958 logs.go:123] Gathering logs for coredns [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0] ...
	I1210 00:31:12.385143  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0"
	I1210 00:31:12.421781  869958 logs.go:123] Gathering logs for container status ...
	I1210 00:31:12.421815  869958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:31:12.458769  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:12.458797  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1210 00:31:12.458963  869958 out.go:270] X Problems detected in kubelet:
	W1210 00:31:12.458980  869958 out.go:270]   Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.458988  869958 out.go:270]   Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.459000  869958 out.go:270]   Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1210 00:31:12.459010  869958 out.go:270]   Dec 10 00:31:06 old-k8s-version-280963 kubelet[1066]: E1210 00:31:06.063272    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	W1210 00:31:12.459023  869958 out.go:270]   Dec 10 00:31:09 old-k8s-version-280963 kubelet[1066]: E1210 00:31:09.063618    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1210 00:31:12.459048  869958 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:12.459062  869958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:22.460270  869958 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 00:31:22.467259  869958 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 00:31:22.469499  869958 out.go:201] 
	W1210 00:31:22.470824  869958 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1210 00:31:22.470878  869958 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1210 00:31:22.470901  869958 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1210 00:31:22.470913  869958 out.go:270] * 
	W1210 00:31:22.472041  869958 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:31:22.473975  869958 out.go:201] 
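	[editor's note] For reference, a minimal recovery sketch following the suggestion printed above. `minikube delete --all --purge` is quoted from the log; the restart flags are illustrative assumptions reconstructed from values visible in this log (profile name, docker driver, containerd runtime, v1.20.0), not the exact invocation under test:
	
	    $ minikube delete --all --purge
	    $ minikube start -p old-k8s-version-280963 --driver=docker \
	        --container-runtime=containerd --kubernetes-version=v1.20.0
	    # any other flags from the original failing run would need to be re-supplied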
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	bb6919a8e0b69       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   c9873f403fe96       dashboard-metrics-scraper-8d5bb5db8-h78fg
	b9cf656a0d778       6e38f40d628db       5 minutes ago       Running             storage-provisioner         1                   8c40f75d8ca10       storage-provisioner
	71efdd7b3ff73       07655ddf2eebe       5 minutes ago       Running             kubernetes-dashboard        0                   24a5323a71f00       kubernetes-dashboard-cd95d586-fnf8h
	5b9539c213bd4       56cc512116c8f       5 minutes ago       Running             busybox                     0                   1ffdf86be7d65       busybox
	1436cfab0a611       50415e5d05f05       5 minutes ago       Running             kindnet-cni                 0                   56dacd8cdace0       kindnet-bx7xb
	d00996a438004       bfe3a36ebd252       5 minutes ago       Running             coredns                     0                   85216c9f961a1       coredns-74ff55c5b-45ksb
	5ef64915e71a0       6e38f40d628db       5 minutes ago       Exited              storage-provisioner         0                   8c40f75d8ca10       storage-provisioner
	930a4290304a3       10cc881966cfd       5 minutes ago       Running             kube-proxy                  0                   d4c5cdc8681e7       kube-proxy-qb2z4
	e24a785fdbd96       3138b6e3d4712       5 minutes ago       Running             kube-scheduler              0                   35be6dcb0ff8e       kube-scheduler-old-k8s-version-280963
	7f21b5ae0b202       b9fa1895dcaa6       5 minutes ago       Running             kube-controller-manager     0                   20d45a0b80ac3       kube-controller-manager-old-k8s-version-280963
	9be25993b65b8       ca9843d3b5454       5 minutes ago       Running             kube-apiserver              0                   c5f8af6106911       kube-apiserver-old-k8s-version-280963
	de4e779f2f1e9       0369cf4303ffd       5 minutes ago       Running             etcd                        0                   d41231201c474       etcd-old-k8s-version-280963
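	[editor's note] The status table above was gathered via the crictl fallback command recorded at 00:31:12 in this log; to re-run it directly on the node (sketch):
	
	    $ sudo crictl ps -a    # the log falls back to `sudo docker ps -a` if crictl is unavailable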
	
	
	==> containerd <==
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.077134275Z" level=info msg="CreateContainer within sandbox \"c9873f403fe9640d8069f4b43574c6195d7c72df3d5147c848c3ef8522e4cf5e\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca\""
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.077643569Z" level=info msg="StartContainer for \"80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca\""
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.146335195Z" level=info msg="StartContainer for \"80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca\" returns successfully"
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.181532281Z" level=info msg="shim disconnected" id=80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca namespace=k8s.io
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.181608443Z" level=warning msg="cleaning up after shim disconnected" id=80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca namespace=k8s.io
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.181620431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.551893217Z" level=info msg="RemoveContainer for \"17eb23f620f4e02ab9a50e85e472ab8e63b5ec7c0d8d67c3004de860409899b3\""
	Dec 10 00:27:30 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:27:30.557563930Z" level=info msg="RemoveContainer for \"17eb23f620f4e02ab9a50e85e472ab8e63b5ec7c0d8d67c3004de860409899b3\" returns successfully"
	Dec 10 00:28:36 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:36.063856259Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:28:36 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:36.087307580Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 10 00:28:36 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:36.088712729Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 10 00:28:36 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:36.088793391Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.065158341Z" level=info msg="CreateContainer within sandbox \"c9873f403fe9640d8069f4b43574c6195d7c72df3d5147c848c3ef8522e4cf5e\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.078186852Z" level=info msg="CreateContainer within sandbox \"c9873f403fe9640d8069f4b43574c6195d7c72df3d5147c848c3ef8522e4cf5e\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1\""
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.078991540Z" level=info msg="StartContainer for \"bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1\""
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.172655371Z" level=info msg="StartContainer for \"bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1\" returns successfully"
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.225267252Z" level=info msg="shim disconnected" id=bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1 namespace=k8s.io
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.225357041Z" level=warning msg="cleaning up after shim disconnected" id=bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1 namespace=k8s.io
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.225368648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.727577708Z" level=info msg="RemoveContainer for \"80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca\""
	Dec 10 00:28:55 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:28:55.732378341Z" level=info msg="RemoveContainer for \"80a0427030a2cdb7c62305d17ec00897b150478fa62162fe0f460b5145f028ca\" returns successfully"
	Dec 10 00:31:21 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:31:21.067930147Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:31:21 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:31:21.092349255Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 10 00:31:21 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:31:21.093804351Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 10 00:31:21 old-k8s-version-280963 containerd[688]: time="2024-12-10T00:31:21.093868542Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
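	[editor's note] The repeated pull failure above can be reproduced by hand on the node with crictl (already used elsewhere in this log). fake.domain is deliberately unresolvable, so the pull is expected to fail the same way:
	
	    $ sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	    # expected: failed to resolve reference ... dial tcp: lookup fake.domain ... no such host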
	
	
	==> coredns [d00996a4380040decb2e6f3c9bcc65ff7f12c74a6f6817167177166144b883f0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33064 - 10903 "HINFO IN 842193102946151339.216109792973782087. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010806784s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49030 - 26184 "HINFO IN 5024412679339346096.4343172623692224972. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006611131s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1210 00:26:01.758288       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-10 00:25:31.757204103 +0000 UTC m=+0.026713724) (total time: 30.000947075s):
	Trace[1427131847]: [30.000947075s] [30.000947075s] END
	E1210 00:26:01.758320       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1210 00:26:01.758343       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-10 00:25:31.757163262 +0000 UTC m=+0.026672881) (total time: 30.001013368s):
	Trace[939984059]: [30.001013368s] [30.001013368s] END
	E1210 00:26:01.758348       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1210 00:26:01.758362       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-10 00:25:31.757210753 +0000 UTC m=+0.026720375) (total time: 30.000940131s):
	Trace[2019727887]: [30.000940131s] [30.000940131s] END
	E1210 00:26:01.758367       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
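	[editor's note] The three reflector timeouts above all target the in-cluster service IP 10.96.0.1:443 shortly after the restart. A hedged sketch of standard kubectl checks that the apiserver endpoint and coredns recover once the cluster settles (names as they appear in this log):
	
	    $ kubectl get endpoints kubernetes                      # the apiserver endpoint behind 10.96.0.1
	    $ kubectl -n kube-system get pods -l k8s-app=kube-dns   # coredns pod status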
	
	
	==> describe nodes <==
	Name:               old-k8s-version-280963
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-280963
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=old-k8s-version-280963
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_22_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:22:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-280963
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:26:29 +0000   Tue, 10 Dec 2024 00:22:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:26:29 +0000   Tue, 10 Dec 2024 00:22:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:26:29 +0000   Tue, 10 Dec 2024 00:22:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:26:29 +0000   Tue, 10 Dec 2024 00:23:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-280963
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 10eb53500e5b428e93adc8f6266666f7
	  System UUID:                b0688326-024d-4ebe-9ce3-d8f6ce47e462
	  Boot ID:                    7d4fb23d-f380-43ef-b743-f39d55af0439
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-45ksb                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m9s
	  kube-system                 etcd-old-k8s-version-280963                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m19s
	  kube-system                 kindnet-bx7xb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m10s
	  kube-system                 kube-apiserver-old-k8s-version-280963             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-controller-manager-old-k8s-version-280963    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-qb2z4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-old-k8s-version-280963             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 metrics-server-9975d5f86-9wg6p                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m30s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-h78fg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-fnf8h               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m34s (x5 over 8m34s)  kubelet     Node old-k8s-version-280963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x4 over 8m34s)  kubelet     Node old-k8s-version-280963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x3 over 8m34s)  kubelet     Node old-k8s-version-280963 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m19s                  kubelet     Node old-k8s-version-280963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s                  kubelet     Node old-k8s-version-280963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s                  kubelet     Node old-k8s-version-280963 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m9s                   kubelet     Node old-k8s-version-280963 status is now: NodeReady
	  Normal  Starting                 8m8s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-280963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-280963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x7 over 5m59s)  kubelet     Node old-k8s-version-280963 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m52s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +1.028751] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000006] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +0.003986] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000006] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +2.011796] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000007] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +4.159610] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000007] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  -0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000001] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +8.191241] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000006] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  +0.003988] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000007] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	[  -0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-b7562d0c2da9
	[  +0.000005] ll header: 00000000: 02 42 9b 70 0c 4a 02 42 c0 a8 55 02 08 00
	
	
	==> etcd [de4e779f2f1e9dbb4a147473498f66677a264e01c0c74453fc0137f378cf8ae2] <==
	2024-12-10 00:27:26.017833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:27:36.017866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:27:46.017860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:27:56.017709 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:28:06.017832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:28:16.017797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:28:26.017798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:28:36.017951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:28:46.017850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:28:56.017887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:29:01.172004 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-9wg6p\" " with result "range_response_count:1 size:4324" took too long (107.20434ms) to execute
	2024-12-10 00:29:06.017925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:29:16.017758 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:29:26.017867 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:29:36.017783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:29:46.017751 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:29:56.017896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:30:06.017858 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:30:16.017877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:30:26.018136 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:30:36.017911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:30:46.017941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:30:56.018043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:31:06.017899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-10 00:31:16.017829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 00:31:23 up  3:13,  0 users,  load average: 0.73, 2.03, 2.29
	Linux old-k8s-version-280963 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1436cfab0a61117409dfbc149a9ed46cffc35222a59e469b4357eb8eeb006a1a] <==
	I1210 00:29:14.663089       1 main.go:301] handling current node
	I1210 00:29:24.663067       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:29:24.663097       1 main.go:301] handling current node
	I1210 00:29:34.656565       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:29:34.656607       1 main.go:301] handling current node
	I1210 00:29:44.662965       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:29:44.663003       1 main.go:301] handling current node
	I1210 00:29:54.662949       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:29:54.662993       1 main.go:301] handling current node
	I1210 00:30:04.657195       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:30:04.657235       1 main.go:301] handling current node
	I1210 00:30:14.661294       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:30:14.661345       1 main.go:301] handling current node
	I1210 00:30:24.662944       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:30:24.662985       1 main.go:301] handling current node
	I1210 00:30:34.655885       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:30:34.655928       1 main.go:301] handling current node
	I1210 00:30:44.662928       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:30:44.662965       1 main.go:301] handling current node
	I1210 00:30:54.658483       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:30:54.658865       1 main.go:301] handling current node
	I1210 00:31:04.657800       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:31:04.657856       1 main.go:301] handling current node
	I1210 00:31:14.659373       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 00:31:14.659517       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9be25993b65b8cdca34c64615c37d67ed96191f7e935d4aa5f3f20b8a71af72d] <==
	I1210 00:28:00.119565       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1210 00:28:00.119573       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1210 00:28:32.816941       1 client.go:360] parsed scheme: "passthrough"
	I1210 00:28:32.816988       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1210 00:28:32.816996       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1210 00:28:33.018965       1 handler_proxy.go:102] no RequestInfo found in the context
	E1210 00:28:33.019051       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1210 00:28:33.019068       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:29:16.713392       1 client.go:360] parsed scheme: "passthrough"
	I1210 00:29:16.713449       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1210 00:29:16.713456       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1210 00:29:49.955749       1 client.go:360] parsed scheme: "passthrough"
	I1210 00:29:49.955799       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1210 00:29:49.955807       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1210 00:30:28.129149       1 client.go:360] parsed scheme: "passthrough"
	I1210 00:30:28.129202       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1210 00:30:28.129210       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1210 00:30:30.457143       1 handler_proxy.go:102] no RequestInfo found in the context
	E1210 00:30:30.457219       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1210 00:30:30.457228       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:31:04.279735       1 client.go:360] parsed scheme: "passthrough"
	I1210 00:31:04.279798       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1210 00:31:04.279806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
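	[editor's note] The recurring 503 for v1beta1.metrics.k8s.io above is consistent with the metrics-server pod never starting (see the kubelet section below). A standard check of the aggregated API's availability (sketch; the exact condition reason may differ):
	
	    $ kubectl get apiservice v1beta1.metrics.k8s.io
	    # expect AVAILABLE=False while metrics-server remains in ImagePullBackOff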
	
	
	==> kube-controller-manager [7f21b5ae0b202880d6305e4c384b15e18f96a0e7d3fdf3efa45355a2af113e82] <==
	W1210 00:26:55.062773       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:27:21.110798       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:27:26.713153       1 request.go:655] Throttling request took 1.048541144s, request: GET:https://192.168.85.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W1210 00:27:27.565100       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:27:51.612634       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:27:59.215523       1 request.go:655] Throttling request took 1.048363834s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W1210 00:28:00.066913       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:28:22.114461       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:28:31.717270       1 request.go:655] Throttling request took 1.048772487s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W1210 00:28:32.568585       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:28:52.616412       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:29:04.219028       1 request.go:655] Throttling request took 1.048700542s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta1?timeout=32s
	W1210 00:29:05.070245       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:29:23.118122       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:29:36.720672       1 request.go:655] Throttling request took 1.048711103s, request: GET:https://192.168.85.2:8443/apis/batch/v1beta1?timeout=32s
	W1210 00:29:37.571976       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:29:53.620074       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:30:09.222258       1 request.go:655] Throttling request took 1.048571463s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W1210 00:30:10.073572       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:30:24.121839       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:30:41.723968       1 request.go:655] Throttling request took 1.048718327s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W1210 00:30:42.575045       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1210 00:30:54.623727       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1210 00:31:14.225478       1 request.go:655] Throttling request took 1.048679773s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W1210 00:31:15.076797       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [930a4290304a3700a3b34bb588be1a8cb0fc8fc88f3c3adb4bc89453498c1ba1] <==
	I1210 00:23:15.365671       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1210 00:23:15.365784       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1210 00:23:15.431756       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1210 00:23:15.431882       1 server_others.go:185] Using iptables Proxier.
	I1210 00:23:15.432498       1 server.go:650] Version: v1.20.0
	I1210 00:23:15.433615       1 config.go:315] Starting service config controller
	I1210 00:23:15.433634       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1210 00:23:15.433666       1 config.go:224] Starting endpoint slice config controller
	I1210 00:23:15.433670       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1210 00:23:15.533780       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1210 00:23:15.533867       1 shared_informer.go:247] Caches are synced for service config 
	I1210 00:25:31.771074       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1210 00:25:31.771142       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1210 00:25:31.840145       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1210 00:25:31.840257       1 server_others.go:185] Using iptables Proxier.
	I1210 00:25:31.840546       1 server.go:650] Version: v1.20.0
	I1210 00:25:31.841041       1 config.go:315] Starting service config controller
	I1210 00:25:31.841051       1 config.go:224] Starting endpoint slice config controller
	I1210 00:25:31.841080       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1210 00:25:31.841067       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1210 00:25:31.941265       1 shared_informer.go:247] Caches are synced for service config 
	I1210 00:25:31.941302       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [e24a785fdbd96316943085ea3d97c2bbf5698967fcac01b25a6a185f04e80b07] <==
	E1210 00:22:55.660566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:22:55.660664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 00:22:55.660749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 00:22:55.661487       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 00:22:55.661532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:22:55.661592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:22:56.693626       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:22:56.801598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 00:22:56.870447       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 00:22:56.954061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:22:56.960399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 00:22:56.989884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 00:22:57.068922       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:22:57.087861       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1210 00:22:59.556341       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I1210 00:25:25.765542       1 serving.go:331] Generated self-signed cert in-memory
	W1210 00:25:29.438592       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 00:25:29.438636       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 00:25:29.438647       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 00:25:29.438657       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 00:25:29.535210       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1210 00:25:29.535846       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 00:25:29.535869       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 00:25:29.535896       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1210 00:25:29.637758       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Dec 10 00:29:50 old-k8s-version-280963 kubelet[1066]: E1210 00:29:50.063838    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:30:03 old-k8s-version-280963 kubelet[1066]: I1210 00:30:03.062916    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:30:03 old-k8s-version-280963 kubelet[1066]: E1210 00:30:03.063224    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:30:05 old-k8s-version-280963 kubelet[1066]: E1210 00:30:05.063807    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:30:17 old-k8s-version-280963 kubelet[1066]: E1210 00:30:17.063774    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:30:18 old-k8s-version-280963 kubelet[1066]: I1210 00:30:18.063061    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:30:18 old-k8s-version-280963 kubelet[1066]: E1210 00:30:18.063380    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:30:29 old-k8s-version-280963 kubelet[1066]: I1210 00:30:29.062967    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:30:29 old-k8s-version-280963 kubelet[1066]: E1210 00:30:29.063392    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:30:32 old-k8s-version-280963 kubelet[1066]: E1210 00:30:32.063891    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: I1210 00:30:41.062887    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:30:41 old-k8s-version-280963 kubelet[1066]: E1210 00:30:41.063206    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:30:44 old-k8s-version-280963 kubelet[1066]: E1210 00:30:44.064113    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: I1210 00:30:54.063223    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:30:54 old-k8s-version-280963 kubelet[1066]: E1210 00:30:54.063639    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:30:56 old-k8s-version-280963 kubelet[1066]: E1210 00:30:56.063846    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:31:06 old-k8s-version-280963 kubelet[1066]: I1210 00:31:06.062932    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:31:06 old-k8s-version-280963 kubelet[1066]: E1210 00:31:06.063272    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:31:09 old-k8s-version-280963 kubelet[1066]: E1210 00:31:09.063618    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 10 00:31:21 old-k8s-version-280963 kubelet[1066]: I1210 00:31:21.063636    1066 scope.go:95] [topologymanager] RemoveContainer - Container ID: bb6919a8e0b6952dc53dbfd18a32c5f406cbf4c685d039991f5c8a1dbb11e9d1
	Dec 10 00:31:21 old-k8s-version-280963 kubelet[1066]: E1210 00:31:21.064204    1066 pod_workers.go:191] Error syncing pod 276e856b-6a65-4d6b-af30-164aa8e39d64 ("dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-h78fg_kubernetes-dashboard(276e856b-6a65-4d6b-af30-164aa8e39d64)"
	Dec 10 00:31:21 old-k8s-version-280963 kubelet[1066]: E1210 00:31:21.094113    1066 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Dec 10 00:31:21 old-k8s-version-280963 kubelet[1066]: E1210 00:31:21.094203    1066 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Dec 10 00:31:21 old-k8s-version-280963 kubelet[1066]: E1210 00:31:21.094471    1066 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-8dbfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Dec 10 00:31:21 old-k8s-version-280963 kubelet[1066]: E1210 00:31:21.094538    1066 pod_workers.go:191] Error syncing pod 094fa345-5ab4-498e-8f36-9c97dd546a69 ("metrics-server-9975d5f86-9wg6p_kube-system(094fa345-5ab4-498e-8f36-9c97dd546a69)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
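
The "back-off 2m40s" above is the kubelet's exponential per-container backoff: crash-loop restarts and failed image pulls are both retried on a schedule that starts at 10s, doubles after each failure, and is capped at 5m, so 2m40s (160s) is the fifth step before the cap. A minimal Go sketch of that schedule, assuming the default 10s base and 5m ceiling (an illustration, not kubelet source):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Assumed kubelet defaults: 10s initial back-off, doubled per failure, 5m ceiling.
		delay, limit := 10*time.Second, 5*time.Minute
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("failure %d: back-off %v\n", attempt, delay)
			delay *= 2
			if delay > limit {
				delay = limit
			}
		}
		// Prints 10s, 20s, 40s, 1m20s, 2m40s, then pins at 5m0s.
	}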
	
	
	==> kubernetes-dashboard [71efdd7b3ff7301fc234121aa9b522c561e17518296c9ac0f096808fd717194e] <==
	2024/12/10 00:25:54 Starting overwatch
	2024/12/10 00:25:54 Using namespace: kubernetes-dashboard
	2024/12/10 00:25:54 Using in-cluster config to connect to apiserver
	2024/12/10 00:25:54 Using secret token for csrf signing
	2024/12/10 00:25:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/10 00:25:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/10 00:25:54 Successful initial request to the apiserver, version: v1.20.0
	2024/12/10 00:25:54 Generating JWE encryption key
	2024/12/10 00:25:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/10 00:25:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/10 00:25:54 Initializing JWE encryption key from synchronized object
	2024/12/10 00:25:54 Creating in-cluster Sidecar client
	2024/12/10 00:25:54 Serving insecurely on HTTP port: 9090
	2024/12/10 00:25:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:26:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:26:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:27:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:27:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:28:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:28:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:29:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:29:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:30:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/10 00:30:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
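
The scraper health check above fails on a fixed 30-second cadence for the entire run because the dashboard-metrics-scraper pod never leaves CrashLoopBackOff. A hedged Go sketch of that kind of fixed-interval retry loop (illustrative only; checkMetricClient is a hypothetical stand-in for the dashboard's probe):

	package main

	import (
		"context"
		"errors"
		"log"
		"time"
	)

	// checkMetricClient is a hypothetical placeholder that always fails,
	// mirroring the "server is currently unable to handle the request" errors above.
	func checkMetricClient(ctx context.Context) error {
		return errors.New("the server is currently unable to handle the request (get services dashboard-metrics-scraper)")
	}

	func main() {
		ctx := context.Background()
		ticker := time.NewTicker(30 * time.Second)
		defer ticker.Stop()
		for {
			err := checkMetricClient(ctx)
			if err == nil {
				return
			}
			log.Printf("Metric client health check failed: %v. Retrying in 30 seconds.", err)
			select {
			case <-ticker.C:
			case <-ctx.Done():
				return
			}
		}
	}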
	
	
	==> storage-provisioner [5ef64915e71a0a82e907f051bd349d35990910b7707e2189239897f76b8fcf24] <==
	I1210 00:23:16.069372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:23:16.079884       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:23:16.081397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:23:16.093778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:23:16.093935       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de264d30-1f19-40ae-92c8-748638df9b78", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-280963_86df2f53-e0e5-48a9-9f77-0924d9396378 became leader
	I1210 00:23:16.094038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280963_86df2f53-e0e5-48a9-9f77-0924d9396378!
	I1210 00:23:16.194994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280963_86df2f53-e0e5-48a9-9f77-0924d9396378!
	I1210 00:25:31.653234       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 00:26:01.667586       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
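
The fatal line above is the provisioner's startup probe of GET /version timing out against the cluster service IP while the control plane restarts. With client-go, an equivalent reachability check looks roughly like this (a sketch assuming in-cluster config; not the provisioner's actual code):

	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		cfg.Timeout = 32 * time.Second // the same ?timeout=32s visible in the URL above
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		v, err := client.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("apiserver reachable, version %s", v.GitVersion)
	}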
	
	
	==> storage-provisioner [b9cf656a0d778fa858636c57f7ed856932e9c797614d0b2a0bb2b7b183d0444e] <==
	I1210 00:26:02.456489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:26:02.493301       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:26:02.495367       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:26:19.911277       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:26:19.912061       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280963_96f0aa2a-3dc2-4166-9ffb-ab63cc136c72!
	I1210 00:26:19.912059       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de264d30-1f19-40ae-92c8-748638df9b78", APIVersion:"v1", ResourceVersion:"819", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-280963_96f0aa2a-3dc2-4166-9ffb-ab63cc136c72 became leader
	I1210 00:26:20.013665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280963_96f0aa2a-3dc2-4166-9ffb-ab63cc136c72!

-- /stdout --
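
Both provisioner instances above serialize on the kube-system/k8s.io-minikube-hostpath lock through client-go leader election; the second instance waits roughly 17s (00:26:02 to 00:26:19) for the previous holder's lease to expire before it becomes leader. A minimal sketch of the pattern; the timings are illustrative defaults, and it uses the modern Lease lock where the old provisioner above used an Endpoints lock:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // stands in for the <node>_<uuid> identity in the log

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; stopping")
				},
			},
		})
	}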
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280963 -n old-k8s-version-280963
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-280963 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-9wg6p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-280963 describe pod metrics-server-9975d5f86-9wg6p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-280963 describe pod metrics-server-9975d5f86-9wg6p: exit status 1 (64.377289ms)

** stderr **
	Error from server (NotFound): pods "metrics-server-9975d5f86-9wg6p" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-280963 describe pod metrics-server-9975d5f86-9wg6p: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (379.68s)
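
The post-mortem above locates the stuck pod with kubectl's --field-selector. The equivalent client-go query, as a sketch (kubeconfig loading shown with the default path; context selection omitted):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same query as `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("non-running pod: %s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}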
Test pass (305/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.31.2/json-events 11.29
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.09
21 TestBinaryMirror 0.76
22 TestOffline 63.78
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 207.44
29 TestAddons/serial/Volcano 39.47
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.45
35 TestAddons/parallel/Registry 17.02
36 TestAddons/parallel/Ingress 20.64
37 TestAddons/parallel/InspektorGadget 10.9
38 TestAddons/parallel/MetricsServer 6.75
40 TestAddons/parallel/CSI 49.02
41 TestAddons/parallel/Headlamp 16.49
42 TestAddons/parallel/CloudSpanner 5.54
43 TestAddons/parallel/LocalPath 53.41
44 TestAddons/parallel/NvidiaDevicePlugin 5.49
45 TestAddons/parallel/Yakd 11.89
46 TestAddons/parallel/AmdGpuDevicePlugin 5.57
47 TestAddons/StoppedEnableDisable 12.17
48 TestCertOptions 30.95
49 TestCertExpiration 217.49
51 TestForceSystemdFlag 31.11
52 TestForceSystemdEnv 29.3
53 TestDockerEnvContainerd 39.71
54 TestKVMDriverInstallOrUpdate 4.56
58 TestErrorSpam/setup 24.73
59 TestErrorSpam/start 0.59
60 TestErrorSpam/status 0.89
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.71
63 TestErrorSpam/stop 1.41
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.89
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.4
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
75 TestFunctional/serial/CacheCmd/cache/add_local 1.91
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 38.32
84 TestFunctional/serial/ComponentHealth 0.08
85 TestFunctional/serial/LogsCmd 1.42
86 TestFunctional/serial/LogsFileCmd 1.45
87 TestFunctional/serial/InvalidService 4.26
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 7.87
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.95
97 TestFunctional/parallel/ServiceCmdConnect 11.5
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 30.48
101 TestFunctional/parallel/SSHCmd 0.6
102 TestFunctional/parallel/CpCmd 1.71
103 TestFunctional/parallel/MySQL 23.79
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 1.69
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 0.56
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.8
119 TestFunctional/parallel/ImageCommands/Setup 1.83
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.21
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.16
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
140 TestFunctional/parallel/ProfileCmd/profile_list 0.39
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
142 TestFunctional/parallel/MountCmd/any-port 8.76
143 TestFunctional/parallel/ServiceCmd/List 0.52
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
146 TestFunctional/parallel/ServiceCmd/Format 0.45
147 TestFunctional/parallel/ServiceCmd/URL 0.35
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
151 TestFunctional/parallel/Version/short 0.06
152 TestFunctional/parallel/Version/components 0.54
153 TestFunctional/parallel/MountCmd/specific-port 1.95
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 95.07
162 TestMultiControlPlane/serial/DeployApp 6.84
163 TestMultiControlPlane/serial/PingHostFromPods 1.08
164 TestMultiControlPlane/serial/AddWorkerNode 21.38
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
167 TestMultiControlPlane/serial/CopyFile 16.25
168 TestMultiControlPlane/serial/StopSecondaryNode 12.55
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 17.38
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.23
173 TestMultiControlPlane/serial/DeleteSecondaryNode 9.32
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 35.86
176 TestMultiControlPlane/serial/RestartCluster 67.02
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 38.23
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 41.9
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.69
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.62
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.8
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 37.81
209 TestKicCustomNetwork/use_default_bridge_network 25.58
210 TestKicExistingNetwork 25.84
211 TestKicCustomSubnet 26.34
212 TestKicStaticIP 24.43
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 51.34
217 TestMountStart/serial/StartWithMountFirst 8.59
218 TestMountStart/serial/VerifyMountFirst 0.28
219 TestMountStart/serial/StartWithMountSecond 6.29
220 TestMountStart/serial/VerifyMountSecond 0.29
221 TestMountStart/serial/DeleteFirst 1.7
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.21
224 TestMountStart/serial/RestartStopped 7.62
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 67.69
229 TestMultiNode/serial/DeployApp2Nodes 20.14
230 TestMultiNode/serial/PingHostFrom2Pods 0.73
231 TestMultiNode/serial/AddNode 16.02
232 TestMultiNode/serial/MultiNodeLabels 0.08
233 TestMultiNode/serial/ProfileList 0.7
234 TestMultiNode/serial/CopyFile 10
235 TestMultiNode/serial/StopNode 2.14
236 TestMultiNode/serial/StartAfterStop 8.71
237 TestMultiNode/serial/RestartKeepsNodes 79.19
238 TestMultiNode/serial/DeleteNode 5.06
239 TestMultiNode/serial/StopMultiNode 23.93
240 TestMultiNode/serial/RestartMultiNode 55.25
241 TestMultiNode/serial/ValidateNameConflict 26.74
246 TestPreload 113.83
248 TestScheduledStopUnix 97.85
251 TestInsufficientStorage 12.66
252 TestRunningBinaryUpgrade 64.46
254 TestKubernetesUpgrade 332.56
255 TestMissingContainerUpgrade 171.95
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestStoppedBinaryUpgrade/Setup 2.41
259 TestNoKubernetes/serial/StartWithK8s 26.1
260 TestStoppedBinaryUpgrade/Upgrade 154.07
261 TestNoKubernetes/serial/StartWithStopK8s 11.14
262 TestNoKubernetes/serial/Start 10.21
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
264 TestNoKubernetes/serial/ProfileList 1.02
265 TestNoKubernetes/serial/Stop 1.18
266 TestNoKubernetes/serial/StartNoArgs 6.34
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
275 TestNetworkPlugins/group/false 3.98
287 TestPause/serial/Start 62.28
288 TestPause/serial/SecondStartNoReconfiguration 8.67
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
290 TestPause/serial/Pause 0.73
291 TestPause/serial/VerifyStatus 0.34
292 TestPause/serial/Unpause 0.69
293 TestPause/serial/PauseAgain 1.02
294 TestPause/serial/DeletePaused 8.31
295 TestPause/serial/VerifyDeletedResources 2.55
296 TestNetworkPlugins/group/auto/Start 60.52
297 TestNetworkPlugins/group/kindnet/Start 51.45
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
300 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
301 TestNetworkPlugins/group/auto/KubeletFlags 0.27
302 TestNetworkPlugins/group/auto/NetCatPod 9.19
303 TestNetworkPlugins/group/kindnet/DNS 0.16
304 TestNetworkPlugins/group/kindnet/Localhost 0.1
305 TestNetworkPlugins/group/kindnet/HairPin 0.1
306 TestNetworkPlugins/group/auto/DNS 0.13
307 TestNetworkPlugins/group/auto/Localhost 0.11
308 TestNetworkPlugins/group/auto/HairPin 0.1
309 TestNetworkPlugins/group/calico/Start 51.81
310 TestNetworkPlugins/group/custom-flannel/Start 42.11
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/custom-flannel/DNS 0.17
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
317 TestNetworkPlugins/group/calico/KubeletFlags 0.28
318 TestNetworkPlugins/group/calico/NetCatPod 9.19
319 TestNetworkPlugins/group/calico/DNS 0.13
320 TestNetworkPlugins/group/calico/Localhost 0.11
321 TestNetworkPlugins/group/calico/HairPin 0.12
322 TestNetworkPlugins/group/enable-default-cni/Start 36.84
323 TestNetworkPlugins/group/flannel/Start 43.64
324 TestNetworkPlugins/group/bridge/Start 71.03
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
332 TestNetworkPlugins/group/flannel/NetCatPod 9.25
334 TestStartStop/group/old-k8s-version/serial/FirstStart 140.41
335 TestNetworkPlugins/group/flannel/DNS 0.14
336 TestNetworkPlugins/group/flannel/Localhost 0.16
337 TestNetworkPlugins/group/flannel/HairPin 0.14
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
339 TestNetworkPlugins/group/bridge/NetCatPod 8.22
341 TestStartStop/group/embed-certs/serial/FirstStart 60.81
342 TestNetworkPlugins/group/bridge/DNS 0.2
343 TestNetworkPlugins/group/bridge/Localhost 0.14
344 TestNetworkPlugins/group/bridge/HairPin 0.17
346 TestStartStop/group/no-preload/serial/FirstStart 66.35
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.6
349 TestStartStop/group/embed-certs/serial/DeployApp 9.28
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
351 TestStartStop/group/embed-certs/serial/Stop 11.99
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
353 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
354 TestStartStop/group/embed-certs/serial/SecondStart 263.43
355 TestStartStop/group/no-preload/serial/DeployApp 10.29
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
359 TestStartStop/group/no-preload/serial/Stop 12.64
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 264.59
362 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
363 TestStartStop/group/no-preload/serial/SecondStart 263.92
364 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
365 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
366 TestStartStop/group/old-k8s-version/serial/Stop 12.28
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
372 TestStartStop/group/embed-certs/serial/Pause 2.82
374 TestStartStop/group/newest-cni/serial/FirstStart 31.24
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
379 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
382 TestStartStop/group/no-preload/serial/Pause 2.94
383 TestStartStop/group/newest-cni/serial/DeployApp 0
384 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
385 TestStartStop/group/newest-cni/serial/Stop 1.25
386 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
387 TestStartStop/group/newest-cni/serial/SecondStart 13.43
388 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
391 TestStartStop/group/newest-cni/serial/Pause 2.78
392 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
395 TestStartStop/group/old-k8s-version/serial/Pause 2.54

TestDownloadOnly/v1.20.0/json-events (16.3s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-510798 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-510798 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.294755889s)
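
Every "(dbg) Run:" / "(dbg) Done:" pair in this report is the test harness shelling out to the built minikube binary and timing the call. Stripped to its essentials, the pattern looks roughly like this (a sketch, not the integration helpers' actual implementation):

	package main

	import (
		"context"
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		// Flags copied from the Run line above; the binary path assumes a built tree.
		args := []string{"start", "-o=json", "--download-only", "-p", "download-only-510798",
			"--force", "--alsologtostderr", "--kubernetes-version=v1.20.0",
			"--container-runtime=containerd", "--driver=docker"}
		start := time.Now()
		out, err := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube start failed: %v\n%s", err, out)
		}
		fmt.Printf("(dbg) Done in %s\n", time.Since(start).Round(time.Millisecond))
	}
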
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.30s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 23:43:52.020771  533916 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 23:43:52.020867  533916 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-510798
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-510798: exit status 85 (67.514484ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-510798 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |          |
	|         | -p download-only-510798        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:35.771785  533928 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:35.771922  533928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:35.771932  533928 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:35.771937  533928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:35.772122  533928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	W1209 23:43:35.772253  533928 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20062-527107/.minikube/config/config.json: open /home/jenkins/minikube-integration/20062-527107/.minikube/config/config.json: no such file or directory
	I1209 23:43:35.772865  533928 out.go:352] Setting JSON to true
	I1209 23:43:35.773867  533928 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8760,"bootTime":1733779056,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:35.773980  533928 start.go:139] virtualization: kvm guest
	I1209 23:43:35.776604  533928 out.go:97] [download-only-510798] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1209 23:43:35.776717  533928 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 23:43:35.776768  533928 notify.go:220] Checking for updates...
	I1209 23:43:35.778398  533928 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:35.780141  533928 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:35.782020  533928 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1209 23:43:35.783721  533928 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1209 23:43:35.785188  533928 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:43:35.788150  533928 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:43:35.788474  533928 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:35.811369  533928 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:43:35.811458  533928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:35.860922  533928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:43:35.851513998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:35.861041  533928 docker.go:318] overlay module found
	I1209 23:43:35.862927  533928 out.go:97] Using the docker driver based on user configuration
	I1209 23:43:35.862962  533928 start.go:297] selected driver: docker
	I1209 23:43:35.862969  533928 start.go:901] validating driver "docker" against <nil>
	I1209 23:43:35.863070  533928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:35.909722  533928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:43:35.901411549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:35.909896  533928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:35.910469  533928 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1209 23:43:35.910628  533928 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:43:35.912441  533928 out.go:169] Using Docker driver with root privileges
	I1209 23:43:35.914271  533928 cni.go:84] Creating CNI manager for ""
	I1209 23:43:35.914355  533928 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 23:43:35.914370  533928 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:35.914466  533928 start.go:340] cluster config:
	{Name:download-only-510798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-510798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:35.916253  533928 out.go:97] Starting "download-only-510798" primary control-plane node in "download-only-510798" cluster
	I1209 23:43:35.916283  533928 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 23:43:35.917941  533928 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:43:35.917977  533928 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:43:35.918151  533928 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:43:35.935926  533928 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:35.936154  533928 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:43:35.936271  533928 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:36.342582  533928 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I1209 23:43:36.342621  533928 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:36.342758  533928 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:43:36.344810  533928 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 23:43:36.344844  533928 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:43:36.440004  533928 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I1209 23:43:47.714707  533928 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:43:47.714792  533928 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:43:48.654053  533928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1209 23:43:48.654461  533928 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/download-only-510798/config.json ...
	I1209 23:43:48.654498  533928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/download-only-510798/config.json: {Name:mke5b4eaa1354621fd76231ba5c08709146c2719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:43:48.654688  533928 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:43:48.654936  533928 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20062-527107/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-510798 host does not exist
	  To start a cluster, run: "minikube start -p download-only-510798"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-510798
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.31.2/json-events (11.29s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-124394 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-124394 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.285560183s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (11.29s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 23:44:03.738174  533916 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 23:44:03.738225  533916 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-124394
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-124394: exit status 85 (67.416682ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-510798 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-510798        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-510798        | download-only-510798 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | -o=json --download-only        | download-only-124394 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-124394        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:52.499332  534297 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:52.499464  534297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:52.499474  534297 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:52.499479  534297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:52.499684  534297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1209 23:43:52.500319  534297 out.go:352] Setting JSON to true
	I1209 23:43:52.501268  534297 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8776,"bootTime":1733779056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:52.501390  534297 start.go:139] virtualization: kvm guest
	I1209 23:43:52.503639  534297 out.go:97] [download-only-124394] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:52.503812  534297 notify.go:220] Checking for updates...
	I1209 23:43:52.505356  534297 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:52.507083  534297 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:52.508561  534297 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1209 23:43:52.510068  534297 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1209 23:43:52.511902  534297 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:43:52.514892  534297 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:43:52.515158  534297 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:52.538429  534297 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:43:52.538520  534297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:52.587639  534297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:43:52.577511637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:52.587838  534297 docker.go:318] overlay module found
	I1209 23:43:52.590406  534297 out.go:97] Using the docker driver based on user configuration
	I1209 23:43:52.590474  534297 start.go:297] selected driver: docker
	I1209 23:43:52.590484  534297 start.go:901] validating driver "docker" against <nil>
	I1209 23:43:52.590691  534297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:52.640760  534297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:43:52.631363666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:52.641012  534297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:52.641575  534297 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1209 23:43:52.641740  534297 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:43:52.643804  534297 out.go:169] Using Docker driver with root privileges
	I1209 23:43:52.645408  534297 cni.go:84] Creating CNI manager for ""
	I1209 23:43:52.645501  534297 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 23:43:52.645519  534297 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:52.645643  534297 start.go:340] cluster config:
	{Name:download-only-124394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-124394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:52.647330  534297 out.go:97] Starting "download-only-124394" primary control-plane node in "download-only-124394" cluster
	I1209 23:43:52.647363  534297 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 23:43:52.648862  534297 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:43:52.648894  534297 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:43:52.648975  534297 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:43:52.667128  534297 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:52.667293  534297 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:43:52.667314  534297 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:43:52.667320  534297 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:43:52.667331  534297 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:43:52.748030  534297 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
	I1209 23:43:52.748072  534297 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:52.748285  534297 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:43:52.750392  534297 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 23:43:52.750437  534297 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:43:52.861871  534297 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:823d7cacd71c9363eaa034fc8738176b -> /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
	I1209 23:44:02.127280  534297 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:44:02.127400  534297 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20062-527107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-124394 host does not exist
	  To start a cluster, run: "minikube start -p download-only-124394"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
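
Note: the download at 23:43:52 above fetches the preload tarball with an ?checksum=md5:... query string and then verifies the saved file (preload.go:254). A minimal sketch of that verification step in Go, under the assumption that the tarball sits in the current directory; the digest is the one from the logged URL:

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifyMD5 hashes the file at path and compares the digest to wantHex.
    func verifyMD5(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	// Digest taken from the ?checksum=md5:... query in the URL logged above;
    	// the local path is illustrative.
    	fmt.Println(verifyMD5(
    		"preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4",
    		"823d7cacd71c9363eaa034fc8738176b"))
    }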
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-124394
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.09s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-322090 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-322090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-322090
--- PASS: TestDownloadOnlyKic (1.09s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I1209 23:44:05.535890  533916 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-326892 --alsologtostderr --binary-mirror http://127.0.0.1:45489 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-326892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-326892
--- PASS: TestBinaryMirror (0.76s)

TestOffline (63.78s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-177966 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-177966 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m0.02732463s)
helpers_test.go:175: Cleaning up "offline-containerd-177966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-177966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-177966: (3.75443381s)
--- PASS: TestOffline (63.78s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-923727
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-923727: exit status 85 (63.047855ms)

-- stdout --
	* Profile "addons-923727" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-923727"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-923727
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-923727: exit status 85 (66.498383ms)

-- stdout --
	* Profile "addons-923727" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-923727"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (207.44s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-923727 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-923727 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m27.442668036s)
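
Note: each "(dbg) Run:" line above is the test harness shelling out to the minikube binary. A rough sketch of that pattern, with the binary path and a subset of the flags taken from the logged command; this is an illustration, not the harness's actual helper:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Generous timeout: the logged run took roughly 3m27s.
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
    	defer cancel()

    	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
    		"start", "-p", "addons-923727", "--wait=true", "--memory=4000",
    		"--driver=docker", "--container-runtime=containerd",
    		"--addons=registry", "--addons=metrics-server")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("err=%v\n%s", err, out)
    }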
--- PASS: TestAddons/Setup (207.44s)

TestAddons/serial/Volcano (39.47s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 11.97889ms
addons_test.go:807: volcano-scheduler stabilized in 12.062896ms
addons_test.go:815: volcano-admission stabilized in 12.103364ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-bxz6d" [225313a1-bf8f-47c7-ad57-ae1c85677dfb] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003712604s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-j45h4" [77725401-28c6-4aba-ba55-0f4f55fe5580] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003406311s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-82h6v" [6ffc877d-313e-49c9-b055-ce3690928f53] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003472707s
addons_test.go:842: (dbg) Run:  kubectl --context addons-923727 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-923727 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-923727 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [43362bf8-d532-4168-9d4a-dc726b2af04e] Pending
helpers_test.go:344: "test-job-nginx-0" [43362bf8-d532-4168-9d4a-dc726b2af04e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [43362bf8-d532-4168-9d4a-dc726b2af04e] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003661659s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable volcano --alsologtostderr -v=1: (11.152028035s)
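
Note: the three 'waiting 6m0s for pods matching ...' checks above are poll loops over pod status. A simplified version of such a loop, shelling out to kubectl rather than using client-go; the context name, namespace, and selector come from the log, while the helper name is made up:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForRunning polls pod phases for a label selector until every pod reports Running.
    func waitForRunning(kctx, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", kctx,
    			"get", "pods", "-n", ns, "-l", selector,
    			"-o", "jsonpath={.items[*].status.phase}").Output()
    		if err == nil {
    			phases := strings.Fields(string(out))
    			ready := len(phases) > 0
    			for _, p := range phases {
    				if p != "Running" {
    					ready = false
    				}
    			}
    			if ready {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
    }

    func main() {
    	fmt.Println(waitForRunning("addons-923727", "volcano-system",
    		"app=volcano-scheduler", 6*time.Minute))
    }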
--- PASS: TestAddons/serial/Volcano (39.47s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-923727 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-923727 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-923727 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-923727 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7de9669a-f397-410f-a4ca-2a979bcf8310] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7de9669a-f397-410f-a4ca-2a979bcf8310] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003366347s
addons_test.go:633: (dbg) Run:  kubectl --context addons-923727 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-923727 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-923727 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

TestAddons/parallel/Registry (17.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.607648ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-w4zsb" [6017b30b-c540-4ad1-b92c-d0d4f2b24f59] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003811784s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vdvdk" [ead4b210-0b51-4fa1-a550-3dbb754d404d] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003773819s
addons_test.go:331: (dbg) Run:  kubectl --context addons-923727 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-923727 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-923727 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.154574823s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 ip
2024/12/09 23:48:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable registry --alsologtostderr -v=1
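
Note: the in-cluster check above runs `wget --spider -S` against http://registry.kube-system.svc.cluster.local, then the harness probes the node IP on :5000 (the DEBUG GET line). An equivalent reachability probe in Go; the service hostname only resolves from inside the cluster, so this would have to run in a pod:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	// Equivalent of `wget --spider`: a HEAD request that fetches no body.
    	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
    	if err != nil {
    		fmt.Println("unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }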
--- PASS: TestAddons/parallel/Registry (17.02s)

TestAddons/parallel/Ingress (20.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-923727 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-923727 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-923727 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5c80e39f-2d6c-4f9d-a69f-41f6bd488f03] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5c80e39f-2d6c-4f9d-a69f-41f6bd488f03] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003689864s
I1209 23:49:12.055654  533916 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-923727 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable ingress-dns --alsologtostderr -v=1: (1.174291287s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable ingress --alsologtostderr -v=1: (8.068389028s)
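
Note: the probe above curls 127.0.0.1 on the node with `-H 'Host: nginx.example.com'`, because ingress-nginx routes on the Host header. In Go the override goes on req.Host (setting the header map alone is not enough); host and URL below mirror the logged curl:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
    	if err != nil {
    		panic(err)
    	}
    	// net/http sends req.Host as the Host header, overriding the URL's host.
    	req.Host = "nginx.example.com"

    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, len(body), "bytes")
    }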
--- PASS: TestAddons/parallel/Ingress (20.64s)

TestAddons/parallel/InspektorGadget (10.9s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-drvng" [cb128563-ad56-445c-b4fc-b97134a455d2] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005283476s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable inspektor-gadget --alsologtostderr -v=1: (5.889827674s)
--- PASS: TestAddons/parallel/InspektorGadget (10.90s)

TestAddons/parallel/MetricsServer (6.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.775477ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6mbnf" [49a198c0-ec27-47fd-bdeb-2e35c7bb72df] Running
I1209 23:48:32.776762  533916 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 23:48:32.776794  533916 kapi.go:107] duration metric: took 26.267543ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003555359s
addons_test.go:402: (dbg) Run:  kubectl --context addons-923727 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.75s)

TestAddons/parallel/CSI (49.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1209 23:48:32.750544  533916 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 26.292717ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-923727 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-923727 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d4bfca0-b1d0-44ee-a6dc-b5db65642000] Pending
helpers_test.go:344: "task-pv-pod" [3d4bfca0-b1d0-44ee-a6dc-b5db65642000] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d4bfca0-b1d0-44ee-a6dc-b5db65642000] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004013723s
addons_test.go:511: (dbg) Run:  kubectl --context addons-923727 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-923727 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-923727 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-923727 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-923727 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-923727 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-923727 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a1d0a786-0f71-4a91-9fff-dd34ce9991c6] Pending
helpers_test.go:344: "task-pv-pod-restore" [a1d0a786-0f71-4a91-9fff-dd34ce9991c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a1d0a786-0f71-4a91-9fff-dd34ce9991c6] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.007066998s
addons_test.go:553: (dbg) Run:  kubectl --context addons-923727 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-923727 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-923727 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.206665654s)
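
Note: the long runs of identical `get pvc ... -o jsonpath={.status.phase}` lines above are a fixed-interval poll on the PVC phase. On reasonably recent kubectl (jsonpath wait conditions landed in v1.23), the same wait could be expressed as a single blocking invocation, for example:

    kubectl --context addons-923727 wait pvc/hpvc -n default --for=jsonpath='{.status.phase}'=Bound --timeout=6m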
--- PASS: TestAddons/parallel/CSI (49.02s)

TestAddons/parallel/Headlamp (16.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-923727 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-rz8zg" [d833e21a-33d1-470d-b165-392897dac7ee] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-rz8zg" [d833e21a-33d1-470d-b165-392897dac7ee] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004702535s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable headlamp --alsologtostderr -v=1: (5.681222249s)
--- PASS: TestAddons/parallel/Headlamp (16.49s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-4v8dj" [e4af7edd-64fb-4cca-8ed0-2070a0f1c45c] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003534886s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (53.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-923727 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-923727 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [50eac722-f27c-4936-9139-b26cd0d34e19] Pending
helpers_test.go:344: "test-local-path" [50eac722-f27c-4936-9139-b26cd0d34e19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [50eac722-f27c-4936-9139-b26cd0d34e19] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [50eac722-f27c-4936-9139-b26cd0d34e19] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003644989s
addons_test.go:906: (dbg) Run:  kubectl --context addons-923727 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 ssh "cat /opt/local-path-provisioner/pvc-eb70b8b7-d23e-4744-8397-3bfd213c49d7_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-923727 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-923727 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.507774606s)
--- PASS: TestAddons/parallel/LocalPath (53.41s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p9r5l" [f80dbd42-d2d5-4347-ac14-03d2161452f5] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004515555s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (11.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fqlmr" [5a3c504b-a56a-48e0-b228-e986393e58ae] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003578618s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-923727 addons disable yakd --alsologtostderr -v=1: (5.889059282s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

TestAddons/parallel/AmdGpuDevicePlugin (5.57s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-9vmck" [d7390195-46bb-4859-9b5b-59bd86100b0e] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004044157s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-923727 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.57s)

TestAddons/StoppedEnableDisable (12.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-923727
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-923727: (11.880478308s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-923727
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-923727
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-923727
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

TestCertOptions (30.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-930790 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1210 00:18:13.583774  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-930790 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (28.113442847s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-930790 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-930790 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-930790 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-930790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-930790
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-930790: (2.171712815s)
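
Note: the test dumps the apiserver certificate with `openssl x509 -text -noout` to confirm that the extra --apiserver-ips/--apiserver-names values landed in the SANs (TestCertExpiration below exercises the NotAfter field the same way). A small Go equivalent for a PEM file; the certificate would first have to be copied out of the node, e.g. via `minikube ssh`, and the local filename is illustrative:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Local copy of /var/lib/minikube/certs/apiserver.crt from the node.
    	data, err := os.ReadFile("apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)     // expect localhost, www.google.com, ...
    	fmt.Println("IP SANs: ", cert.IPAddresses)  // expect 127.0.0.1, 192.168.15.15, ...
    	fmt.Println("NotAfter:", cert.NotAfter)
    }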
--- PASS: TestCertOptions (30.95s)

TestCertExpiration (217.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-230896 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-230896 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (28.949388121s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-230896 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-230896 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.394914294s)
helpers_test.go:175: Cleaning up "cert-expiration-230896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-230896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-230896: (2.147471491s)
--- PASS: TestCertExpiration (217.49s)

TestForceSystemdFlag (31.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-244212 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-244212 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.657650775s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-244212 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-244212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-244212
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-244212: (2.139946058s)
--- PASS: TestForceSystemdFlag (31.11s)

TestForceSystemdEnv (29.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-538322 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-538322 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.574677498s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-538322 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-538322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-538322
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-538322: (7.408336319s)
--- PASS: TestForceSystemdEnv (29.30s)

TestDockerEnvContainerd (39.71s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-916025 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-916025 --driver=docker  --container-runtime=containerd: (23.855666667s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-916025"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-VKhk38QJAMBG/agent.559502" SSH_AGENT_PID="559503" DOCKER_HOST=ssh://docker@127.0.0.1:33269 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-VKhk38QJAMBG/agent.559502" SSH_AGENT_PID="559503" DOCKER_HOST=ssh://docker@127.0.0.1:33269 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-VKhk38QJAMBG/agent.559502" SSH_AGENT_PID="559503" DOCKER_HOST=ssh://docker@127.0.0.1:33269 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.755195119s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-VKhk38QJAMBG/agent.559502" SSH_AGENT_PID="559503" DOCKER_HOST=ssh://docker@127.0.0.1:33269 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-916025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-916025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-916025: (1.961980109s)
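
Note: the docker-env test drives a remote docker CLI purely through environment variables (an SSH agent socket plus DOCKER_HOST=ssh://...). The same injection from Go; the socket path, agent pid, and port below are the session-specific values from this run and would differ on every invocation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("docker", "version")
    	// Inherit the current environment, then point the CLI at the minikube
    	// node over SSH. These values are placeholders from the logged session.
    	cmd.Env = append(os.Environ(),
    		"DOCKER_HOST=ssh://docker@127.0.0.1:33269",
    		"SSH_AUTH_SOCK=/tmp/ssh-VKhk38QJAMBG/agent.559502",
    		"SSH_AGENT_PID=559503",
    	)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("err=%v\n%s", err, out)
    }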
--- PASS: TestDockerEnvContainerd (39.71s)

TestKVMDriverInstallOrUpdate (4.56s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1210 00:16:27.314540  533916 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:16:27.314660  533916 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1210 00:16:27.343596  533916 install.go:62] docker-machine-driver-kvm2: exit status 1
W1210 00:16:27.343936  533916 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:16:27.343989  533916 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1401439103/001/docker-machine-driver-kvm2
I1210 00:16:27.583451  533916 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1401439103/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc00078b4b0 gz:0xc00078b4b8 tar:0xc00078b460 tar.bz2:0xc00078b470 tar.gz:0xc00078b480 tar.xz:0xc00078b490 tar.zst:0xc00078b4a0 tbz2:0xc00078b470 tgz:0xc00078b480 txz:0xc00078b490 tzst:0xc00078b4a0 xz:0xc00078b4c0 zip:0xc00078b4d0 zst:0xc00078b4c8] Getters:map[file:0xc0021703e0 http:0xc0007aa370 https:0xc0007aa3c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:16:27.583514  533916 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1401439103/001/docker-machine-driver-kvm2
I1210 00:16:30.034134  533916 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:16:30.034435  533916 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1210 00:16:30.077196  533916 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1210 00:16:30.077240  533916 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1210 00:16:30.077328  533916 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:16:30.077378  533916 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1401439103/002/docker-machine-driver-kvm2
I1210 00:16:30.130013  533916 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1401439103/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc00078b4b0 gz:0xc00078b4b8 tar:0xc00078b460 tar.bz2:0xc00078b470 tar.gz:0xc00078b480 tar.xz:0xc00078b490 tar.zst:0xc00078b4a0 tbz2:0xc00078b470 tgz:0xc00078b480 txz:0xc00078b490 tzst:0xc00078b4a0 xz:0xc00078b4c0 zip:0xc00078b4d0 zst:0xc00078b4c8] Getters:map[file:0xc002170a90 http:0xc0007ab720 https:0xc0007ab770] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:16:30.130060  533916 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1401439103/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.56s)
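
The two download attempts above show the installer's fallback: it first fetches an architecture-suffixed release asset and, when that asset's checksum file returns 404, retries the unsuffixed "common" asset. A shell sketch of that logic, with the version and arch hard-coded as assumptions taken from this log:

    VER=v1.3.0; ARCH=amd64
    BASE="https://github.com/kubernetes/minikube/releases/download/$VER"
    # try the arch-specific binary plus checksum first, fall back to the common name
    curl -fsSLO "$BASE/docker-machine-driver-kvm2-$ARCH.sha256" \
      && curl -fsSLO "$BASE/docker-machine-driver-kvm2-$ARCH" \
      || { curl -fsSLO "$BASE/docker-machine-driver-kvm2.sha256"; \
           curl -fsSLO "$BASE/docker-machine-driver-kvm2"; }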

                                                
                                    
TestErrorSpam/setup (24.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-232294 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-232294 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-232294 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-232294 --driver=docker  --container-runtime=containerd: (24.731777723s)
--- PASS: TestErrorSpam/setup (24.73s)

                                                
                                    
TestErrorSpam/start (0.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

                                                
                                    
TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 status
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.71s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

                                                
                                    
TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 stop: (1.199387959s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-232294 --log_dir /tmp/nospam-232294 stop
--- PASS: TestErrorSpam/stop (1.41s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20062-527107/.minikube/files/etc/test/nested/copy/533916/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618530 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-618530 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (57.893255396s)
--- PASS: TestFunctional/serial/StartWithProxy (57.89s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.4s)

=== RUN   TestFunctional/serial/SoftStart
I1209 23:52:14.332395  533916 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618530 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-618530 --alsologtostderr -v=8: (5.394865576s)
functional_test.go:663: soft start took 5.395621731s for "functional-618530" cluster.
I1209 23:52:19.727653  533916 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (5.40s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-618530 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-618530 cache add registry.k8s.io/pause:3.3: (1.09860149s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-618530 /tmp/TestFunctionalserialCacheCmdcacheadd_local2766275966/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cache add minikube-local-cache-test:functional-618530
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-618530 cache add minikube-local-cache-test:functional-618530: (1.576379839s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cache delete minikube-local-cache-test:functional-618530
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-618530
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.913625ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
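
The sequence above is a self-contained check that `cache reload` repopulates images deleted from the node's containerd store. With a release binary instead of the test build, it would look like:

    minikube -p functional-618530 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-618530 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    minikube -p functional-618530 cache reload
    minikube -p functional-618530 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again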

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 kubectl -- --context functional-618530 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-618530 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618530 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 23:52:33.814709  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:33.821116  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:33.832512  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:33.853977  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:33.895495  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:33.976996  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:34.138596  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:34.460409  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:35.102513  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:36.383931  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:38.946916  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:44.068363  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:54.310741  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-618530 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.322306141s)
functional_test.go:761: restart took 38.322444179s for "functional-618530" cluster.
I1209 23:53:05.404676  533916 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (38.32s)
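
The restart above passes a component flag through to the apiserver via --extra-config, in component.key=value form; the equivalent release-binary invocation, taken directly from the logged command, would be:

    minikube start -p functional-618530 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all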

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-618530 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-618530 logs: (1.418687392s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 logs --file /tmp/TestFunctionalserialLogsFileCmd1461436549/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-618530 logs --file /tmp/TestFunctionalserialLogsFileCmd1461436549/001/logs.txt: (1.444924413s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-618530 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-618530
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-618530: exit status 115 (337.990313ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32279 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-618530 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
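
The contents of testdata/invalidsvc.yaml are not shown in the log. A hypothetical stand-in that should reproduce the same SVC_UNREACHABLE exit is a NodePort service whose selector matches no running pod:

    # assumption: a service with zero ready endpoints makes `minikube service` exit 115
    kubectl --context functional-618530 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod
      ports:
      - port: 80
    EOF
    minikube service invalid-svc -p functional-618530; echo $?   # expect 115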

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 config get cpus: exit status 14 (71.228531ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 config get cpus: exit status 14 (61.881327ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
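
In this run, `config get` on an absent key exits 14 with "specified key could not be found in config", so the set/unset/get pairs double as an existence check:

    minikube -p functional-618530 config set cpus 2
    minikube -p functional-618530 config get cpus     # prints 2
    minikube -p functional-618530 config unset cpus
    minikube -p functional-618530 config get cpus     # exit status 14: key absent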

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618530 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618530 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 579532: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.87s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618530 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-618530 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (166.335577ms)

                                                
                                                
-- stdout --
	* [functional-618530] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1209 23:53:27.127131  578736 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:53:27.127239  578736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:53:27.127250  578736 out.go:358] Setting ErrFile to fd 2...
	I1209 23:53:27.127256  578736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:53:27.127459  578736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1209 23:53:27.128088  578736 out.go:352] Setting JSON to false
	I1209 23:53:27.129196  578736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9351,"bootTime":1733779056,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:53:27.129347  578736 start.go:139] virtualization: kvm guest
	I1209 23:53:27.131857  578736 out.go:177] * [functional-618530] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:53:27.133316  578736 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:53:27.133380  578736 notify.go:220] Checking for updates...
	I1209 23:53:27.136512  578736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:53:27.137737  578736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1209 23:53:27.138984  578736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1209 23:53:27.140297  578736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:53:27.141773  578736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:53:27.143464  578736 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:53:27.143895  578736 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:53:27.167863  578736 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:53:27.168001  578736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:53:27.222492  578736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:53:27.212135405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:53:27.222600  578736 docker.go:318] overlay module found
	I1209 23:53:27.224774  578736 out.go:177] * Using the docker driver based on existing profile
	I1209 23:53:27.226169  578736 start.go:297] selected driver: docker
	I1209 23:53:27.226211  578736 start.go:901] validating driver "docker" against &{Name:functional-618530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-618530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:53:27.226313  578736 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:53:27.228553  578736 out.go:201] 
	W1209 23:53:27.229880  578736 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 23:53:27.231168  578736 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618530 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.38s)
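
--dry-run runs the full validation path without creating anything, which is why the 250MB request above fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second, memory-flag-free dry run passes. Reproduced with a release binary, flags taken from the logged command:

    minikube start -p functional-618530 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    echo $?   # 23: 250MiB is below the 1800MB usable minimum reported above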

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618530 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-618530 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (159.01715ms)

                                                
                                                
-- stdout --
	* [functional-618530] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1209 23:53:26.964003  578638 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:53:26.964114  578638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:53:26.964119  578638 out.go:358] Setting ErrFile to fd 2...
	I1209 23:53:26.964123  578638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:53:26.964401  578638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1209 23:53:26.964959  578638 out.go:352] Setting JSON to false
	I1209 23:53:26.966068  578638 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9351,"bootTime":1733779056,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:53:26.966176  578638 start.go:139] virtualization: kvm guest
	I1209 23:53:26.968565  578638 out.go:177] * [functional-618530] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1209 23:53:26.970155  578638 notify.go:220] Checking for updates...
	I1209 23:53:26.970163  578638 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:53:26.971697  578638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:53:26.973006  578638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1209 23:53:26.974280  578638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1209 23:53:26.975544  578638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:53:26.976765  578638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:53:26.978466  578638 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:53:26.979094  578638 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:53:27.002898  578638 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:53:27.003069  578638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:53:27.058539  578638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:53:27.049219126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:53:27.058696  578638 docker.go:318] overlay module found
	I1209 23:53:27.060679  578638 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1209 23:53:27.062020  578638 start.go:297] selected driver: docker
	I1209 23:53:27.062042  578638 start.go:901] validating driver "docker" against &{Name:functional-618530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-618530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:53:27.062163  578638 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:53:27.064504  578638 out.go:201] 
	W1209 23:53:27.066335  578638 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 23:53:27.068106  578638 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
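
The French output above comes from the same RSRC_INSUFFICIENT_REQ_MEMORY path as the DryRun test. The log does not show how the locale was selected; assuming minikube localizes its output based on the process locale, a sketch would be:

    # assumption: a French LC_ALL (or LANG) selects the fr translation
    LC_ALL=fr_FR.UTF-8 minikube start -p functional-618530 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd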

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
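
The second invocation above shows that `status -f` takes a Go template over the status fields (the "kublet" label is the test's own template text, not a minikube field name); for scripting, the JSON form is usually easier to consume:

    minikube -p functional-618530 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    minikube -p functional-618530 status -o json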

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-618530 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-618530 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9j89z" [6ae268e6-336e-4619-997b-3877a21d8493] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1209 23:53:14.792121  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9j89z" [6ae268e6-336e-4619-997b-3877a21d8493] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004188691s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32409
functional_test.go:1675: http://192.168.49.2:32409: success! body:

Hostname: hello-node-connect-67bdd5bbb4-9j89z

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32409
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.50s)
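
End to end, the test above boils down to three commands plus a probe of the returned URL; with a release binary it would look like this (the curl step is an added illustration, not part of the logged test):

    kubectl --context functional-618530 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-618530 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-618530 service hello-node-connect --url)
    curl -s "$URL"   # echoes the request details, as in the body above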

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [34bccdb9-9e38-44ad-b675-3ffcf244c39d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004464332s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-618530 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-618530 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-618530 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-618530 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f3ece94b-d49f-4557-869a-70cd1620fd59] Pending
helpers_test.go:344: "sp-pod" [f3ece94b-d49f-4557-869a-70cd1620fd59] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f3ece94b-d49f-4557-869a-70cd1620fd59] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003823286s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-618530 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-618530 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-618530 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [455f1320-8218-4609-a11e-c6662d68a699] Pending
helpers_test.go:344: "sp-pod" [455f1320-8218-4609-a11e-c6662d68a699] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [455f1320-8218-4609-a11e-c6662d68a699] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004711167s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-618530 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.48s)
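
All of the commands this test runs also work standalone; the sequence below condenses the round trip the log shows: bind a claim, write through a pod, recreate the pod, and confirm the file survived on the persistent volume. The manifest contents are not shown in this log, so only the file names and the pod/claim names are taken from the output above; everything else is a sketch.

kubectl --context functional-618530 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-618530 get pvc myclaim -o=json                        # expect phase "Bound"
kubectl --context functional-618530 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-618530 exec sp-pod -- touch /tmp/mount/foo            # write through the mounted PV
kubectl --context functional-618530 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-618530 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-618530 exec sp-pod -- ls /tmp/mount                   # expect "foo" to have survived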

TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (1.71s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh -n functional-618530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cp functional-618530:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1514645659/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh -n functional-618530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh -n functional-618530 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.71s)
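
The three cp invocations above cover host-to-guest, guest-to-host, and copying into a guest directory that does not yet exist; each is verified by reading the file back over SSH. As standalone commands (sketch; the local destination path below is a placeholder, the test used a per-test temp dir):

out/minikube-linux-amd64 -p functional-618530 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> guest
out/minikube-linux-amd64 -p functional-618530 cp functional-618530:/home/docker/cp-test.txt ./cp-test.txt   # guest -> host
out/minikube-linux-amd64 -p functional-618530 ssh -n functional-618530 "sudo cat /home/docker/cp-test.txt"  # verify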

TestFunctional/parallel/MySQL (23.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-618530 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-z5l9l" [29557a2a-30b1-42b5-a074-2c079b8cf349] Pending
helpers_test.go:344: "mysql-6cdb49bbb-z5l9l" [29557a2a-30b1-42b5-a074-2c079b8cf349] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-z5l9l" [29557a2a-30b1-42b5-a074-2c079b8cf349] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004094431s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-618530 exec mysql-6cdb49bbb-z5l9l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-618530 exec mysql-6cdb49bbb-z5l9l -- mysql -ppassword -e "show databases;": exit status 1 (118.635417ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1209 23:53:52.161141  533916 retry.go:31] will retry after 1.248852112s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-618530 exec mysql-6cdb49bbb-z5l9l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-618530 exec mysql-6cdb49bbb-z5l9l -- mysql -ppassword -e "show databases;": exit status 1 (103.761934ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1209 23:53:53.514491  533916 retry.go:31] will retry after 1.987348812s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-618530 exec mysql-6cdb49bbb-z5l9l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.79s)
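
The two failures above are the typical startup sequence for the mysql image: authentication is first refused while the server initializes (ERROR 1045), then the socket is briefly unavailable (ERROR 2002), and the third attempt succeeds. A minimal shell loop mirroring what the harness's retry.go does (sketch; the harness uses jittered backoff, so the fixed sleep here is an assumption):

until kubectl --context functional-618530 exec mysql-6cdb49bbb-z5l9l -- \
    mysql -ppassword -e "show databases;"; do
  sleep 2   # retry until mysqld is initialized and accepting connections
done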

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/533916/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /etc/test/nested/copy/533916/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/533916.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /etc/ssl/certs/533916.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/533916.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /usr/share/ca-certificates/533916.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5339162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /etc/ssl/certs/5339162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5339162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /usr/share/ca-certificates/5339162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
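
The test checks each synced certificate twice: once under its file name (533916.pem / 5339162.pem) and once under the OpenSSL subject-hash name (51391683.0 / 3ec20f2e.0) that TLS libraries use for lookup in /etc/ssl/certs. The hash name can be derived locally from the certificate itself (sketch; run against the same .pem the test synced, where the printed hash should presumably match the ".0" file checked above):

openssl x509 -noout -hash -in 533916.pem   # prints the 8-hex-digit subject hash, i.e. the ".0" file name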

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-618530 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh "sudo systemctl is-active docker": exit status 1 (288.495246ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh "sudo systemctl is-active crio": exit status 1 (279.980489ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
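
systemctl is-active both prints the unit state and encodes it in its exit code (0 for active, non-zero otherwise; status 3 is what the inactive units return here), so the test asserts a non-zero exit plus "inactive" on stdout for the two runtimes that should be off under --container-runtime=containerd. By hand (sketch; the containerd check is an extra illustration, not part of this test):

out/minikube-linux-amd64 -p functional-618530 ssh "sudo systemctl is-active containerd"   # expect "active", exit 0
out/minikube-linux-amd64 -p functional-618530 ssh "sudo systemctl is-active docker"       # expect "inactive", non-zero exit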

TestFunctional/parallel/License (0.56s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618530 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-618530
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-618530
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618530 image ls --format short --alsologtostderr:
I1209 23:53:35.367363  581616 out.go:345] Setting OutFile to fd 1 ...
I1209 23:53:35.367504  581616 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:35.367514  581616 out.go:358] Setting ErrFile to fd 2...
I1209 23:53:35.367521  581616 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:35.367755  581616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
I1209 23:53:35.368464  581616 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:35.368598  581616 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:35.369046  581616 cli_runner.go:164] Run: docker container inspect functional-618530 --format={{.State.Status}}
I1209 23:53:35.387226  581616 ssh_runner.go:195] Run: systemctl --version
I1209 23:53:35.387294  581616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618530
I1209 23:53:35.404716  581616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/functional-618530/id_rsa Username:docker}
I1209 23:53:35.499612  581616 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
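
As the stderr above shows, image ls is implemented by SSHing into the node and querying the CRI runtime; the same raw data is available directly (sketch):

out/minikube-linux-amd64 -p functional-618530 ssh "sudo crictl images --output json"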

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618530 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-618530  | sha256:dfd5b9 | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:91ca84 | 22.8MB |
| docker.io/library/nginx                     | latest             | sha256:66f8bd | 72.1MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| docker.io/kicbase/echo-server               | functional-618530  | sha256:9056ab | 2.37MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:3a5bc2 | 38.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:0486b6 | 26.1MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:505d57 | 30.2MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:9499c9 | 28MB   |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:847c7b | 20.1MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:2e96e5 | 56.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618530 image ls --format table --alsologtostderr:
I1209 23:53:38.127874  582820 out.go:345] Setting OutFile to fd 1 ...
I1209 23:53:38.127986  582820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:38.127991  582820 out.go:358] Setting ErrFile to fd 2...
I1209 23:53:38.127995  582820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:38.128194  582820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
I1209 23:53:38.128882  582820 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:38.129039  582820 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:38.129642  582820 cli_runner.go:164] Run: docker container inspect functional-618530 --format={{.State.Status}}
I1209 23:53:38.148479  582820 ssh_runner.go:195] Run: systemctl --version
I1209 23:53:38.148547  582820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618530
I1209 23:53:38.166782  582820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/functional-618530/id_rsa Username:docker}
I1209 23:53:38.263207  582820 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618530 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22806346"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a53841
0","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"27972388"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-618530"]
,"size":"2372971"},{"id":"sha256:dfd5b991c67d454145113e0aa45bdb46b144546314eb6acc8f150347f3f8d68b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-618530"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"56909194"},{"id":"sha256:
66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"72099501"},{"id":"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"26147288"},{"id":"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"20102990"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c48
3e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"38600298"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"30225833"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618530 image ls --format json --alsologtostderr:
I1209 23:53:37.860229  582775 out.go:345] Setting OutFile to fd 1 ...
I1209 23:53:37.860394  582775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:37.860406  582775 out.go:358] Setting ErrFile to fd 2...
I1209 23:53:37.860414  582775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:37.860617  582775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
I1209 23:53:37.861265  582775 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:37.861397  582775 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:37.861834  582775 cli_runner.go:164] Run: docker container inspect functional-618530 --format={{.State.Status}}
I1209 23:53:37.883562  582775 ssh_runner.go:195] Run: systemctl --version
I1209 23:53:37.883633  582775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618530
I1209 23:53:37.906247  582775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/functional-618530/id_rsa Username:docker}
I1209 23:53:38.032382  582775 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618530 image ls --format yaml --alsologtostderr:
- id: sha256:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
repoTags:
- docker.io/library/nginx:alpine
size: "22806346"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "72099501"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-618530
size: "2372971"
- id: sha256:dfd5b991c67d454145113e0aa45bdb46b144546314eb6acc8f150347f3f8d68b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-618530
size: "991"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "26147288"
- id: sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "20102990"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "38600298"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "56909194"
- id: sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "27972388"
- id: sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "30225833"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618530 image ls --format yaml --alsologtostderr:
I1209 23:53:35.595638  581685 out.go:345] Setting OutFile to fd 1 ...
I1209 23:53:35.595911  581685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:35.595923  581685 out.go:358] Setting ErrFile to fd 2...
I1209 23:53:35.595929  581685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:35.596165  581685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
I1209 23:53:35.596815  581685 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:35.596943  581685 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:35.597335  581685 cli_runner.go:164] Run: docker container inspect functional-618530 --format={{.State.Status}}
I1209 23:53:35.616156  581685 ssh_runner.go:195] Run: systemctl --version
I1209 23:53:35.616228  581685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618530
I1209 23:53:35.639538  581685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/functional-618530/id_rsa Username:docker}
I1209 23:53:35.731948  581685 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh pgrep buildkitd: exit status 1 (275.860233ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image build -t localhost/my-image:functional-618530 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-618530 image build -t localhost/my-image:functional-618530 testdata/build --alsologtostderr: (4.206642092s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618530 image build -t localhost/my-image:functional-618530 testdata/build --alsologtostderr:
I1209 23:53:36.163902  582125 out.go:345] Setting OutFile to fd 1 ...
I1209 23:53:36.164291  582125 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:36.164324  582125 out.go:358] Setting ErrFile to fd 2...
I1209 23:53:36.164340  582125 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:53:36.164671  582125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
I1209 23:53:36.165628  582125 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:36.166537  582125 config.go:182] Loaded profile config "functional-618530": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:53:36.167264  582125 cli_runner.go:164] Run: docker container inspect functional-618530 --format={{.State.Status}}
I1209 23:53:36.195570  582125 ssh_runner.go:195] Run: systemctl --version
I1209 23:53:36.195635  582125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618530
I1209 23:53:36.215153  582125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/functional-618530/id_rsa Username:docker}
I1209 23:53:36.303542  582125 build_images.go:161] Building image from path: /tmp/build.236716236.tar
I1209 23:53:36.303640  582125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 23:53:36.328543  582125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.236716236.tar
I1209 23:53:36.332362  582125 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.236716236.tar: stat -c "%s %y" /var/lib/minikube/build/build.236716236.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.236716236.tar': No such file or directory
I1209 23:53:36.332397  582125 ssh_runner.go:362] scp /tmp/build.236716236.tar --> /var/lib/minikube/build/build.236716236.tar (3072 bytes)
I1209 23:53:36.357785  582125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.236716236
I1209 23:53:36.366923  582125 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.236716236 -xf /var/lib/minikube/build/build.236716236.tar
I1209 23:53:36.376330  582125 containerd.go:394] Building image: /var/lib/minikube/build/build.236716236
I1209 23:53:36.376439  582125 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.236716236 --local dockerfile=/var/lib/minikube/build/build.236716236 --output type=image,name=localhost/my-image:functional-618530
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:b423e2d9e7b39d6a6c85d71f80b7e21dde892de509c54b7c33317ce53c8309bf done
#8 exporting config sha256:4836d6f5a40424ecf695aecf5a5663f0a411d4c0a2d2c5c4d6778c730729d9f4 0.0s done
#8 naming to localhost/my-image:functional-618530 done
#8 DONE 0.2s
I1209 23:53:40.163396  582125 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.236716236 --local dockerfile=/var/lib/minikube/build/build.236716236 --output type=image,name=localhost/my-image:functional-618530: (3.786912986s)
I1209 23:53:40.163470  582125 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.236716236
I1209 23:53:40.234391  582125 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.236716236.tar
I1209 23:53:40.246664  582125 build_images.go:217] Built localhost/my-image:functional-618530 from /tmp/build.236716236.tar
I1209 23:53:40.246704  582125 build_images.go:133] succeeded building to: functional-618530
I1209 23:53:40.246710  582125 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.80s)
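
The buildkit steps #5 through #7 reveal the shape of the Dockerfile under testdata/build; a reconstruction along those lines (sketch; the actual testdata file may differ in details such as the pinned digest):

cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-618530 image build -t localhost/my-image:functional-618530 . --alsologtostderr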

TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.806413042s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-618530
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 575390: os: process already finished
helpers_test.go:502: unable to terminate pid 575044: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-618530 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [56de70d3-fd2d-45f5-a570-7058ac6f97d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [56de70d3-fd2d-45f5-a570-7058ac6f97d0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003642459s
I1209 23:53:23.829540  533916 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)
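
On a local cluster, a LoadBalancer service only receives an ingress IP while minikube tunnel is running, which is why this test starts the tunnel first and then waits for nginx-svc. The equivalent manual check (sketch):

out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr &
kubectl --context functional-618530 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # non-empty once the tunnel is up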

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image load --daemon kicbase/echo-server:functional-618530 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image load --daemon kicbase/echo-server:functional-618530 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-618530
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image load --daemon kicbase/echo-server:functional-618530 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-618530 image load --daemon kicbase/echo-server:functional-618530 --alsologtostderr: (1.166043144s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image save kicbase/echo-server:functional-618530 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image rm kicbase/echo-server:functional-618530 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-618530
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 image save --daemon kicbase/echo-server:functional-618530 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-618530
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
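
The save/load tests above form a round trip: an image is exported from the cluster's containerd either to a tar archive or straight into the host docker daemon, and re-imported from the tar. Condensed (sketch; ./echo-server-save.tar is a placeholder for the workspace path used above):

out/minikube-linux-amd64 -p functional-618530 image save kicbase/echo-server:functional-618530 ./echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-618530 image load ./echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-618530 image save --daemon kicbase/echo-server:functional-618530 --alsologtostderr
docker image inspect kicbase/echo-server:functional-618530   # confirms the image reached the docker daemon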

TestFunctional/parallel/ServiceCmd/DeployApp (6.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-618530 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-618530 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-k6jzh" [c4e119c2-ac82-4ee5-984b-d47c01de03d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-k6jzh" [c4e119c2-ac82-4ee5-984b-d47c01de03d5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003936443s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.16s)
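
Both setup commands appear verbatim above; together with a selector-based wait they are all the deploy step needs (sketch; the get pods probe illustrates the wait, it is not the harness's actual polling code):

kubectl --context functional-618530 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-618530 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-618530 get pods -l app=hello-node   # the test waits for this selector to report Running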

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-618530 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.108.31 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-618530 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "333.408266ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.018901ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "317.305076ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.733905ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (8.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdany-port1669107387/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733788405198215462" to /tmp/TestFunctionalparallelMountCmdany-port1669107387/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733788405198215462" to /tmp/TestFunctionalparallelMountCmdany-port1669107387/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733788405198215462" to /tmp/TestFunctionalparallelMountCmdany-port1669107387/001/test-1733788405198215462
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.989956ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 23:53:25.448525  533916 retry.go:31] will retry after 483.75371ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 23:53 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 23:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 23:53 test-1733788405198215462
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh cat /mount-9p/test-1733788405198215462
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-618530 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f33f5de4-f31a-48d8-8adb-d03c57f3c7a9] Pending
helpers_test.go:344: "busybox-mount" [f33f5de4-f31a-48d8-8adb-d03c57f3c7a9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f33f5de4-f31a-48d8-8adb-d03c57f3c7a9] Running
helpers_test.go:344: "busybox-mount" [f33f5de4-f31a-48d8-8adb-d03c57f3c7a9] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f33f5de4-f31a-48d8-8adb-d03c57f3c7a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004407092s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-618530 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdany-port1669107387/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.76s)
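
The any-port test automates the standard 9p host-mount workflow. A sketch of the equivalent manual session (profile name from the test; /tmp/demo is an arbitrary host directory, not from the log):

    # expose a host directory inside the guest over 9p (runs in the foreground)
    minikube mount -p functional-618530 /tmp/demo:/mount-9p &
    # verify from inside the guest; findmnt exits non-zero until the mount is
    # ready, which is why the test retries after the first failed probe
    minikube -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p"
    # tear down
    minikube -p functional-618530 ssh "sudo umount -f /mount-9p"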

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 service list -o json
functional_test.go:1494: Took "505.228759ms" to run "out/minikube-linux-amd64 -p functional-618530 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30338
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30338
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
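
Together, the ServiceCmd subtests cover the main ways of resolving a NodePort service endpoint. A minimal sketch using the same hello-node service:

    # inventory of services, as a table or as JSON
    minikube -p functional-618530 service list -o json
    # resolve the endpoint, plain and TLS variants
    minikube -p functional-618530 service hello-node --url
    minikube -p functional-618530 service --namespace=default --https --url hello-node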

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdspecific-port1734852269/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.747168ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 23:53:34.254255  533916 retry.go:31] will retry after 667.470406ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T /mount-9p | grep 9p"
2024/12/09 23:53:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdspecific-port1734852269/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh "sudo umount -f /mount-9p": exit status 1 (263.616188ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-618530 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdspecific-port1734852269/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
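
The --port flag pins the host side of the 9p transport rather than letting minikube pick a free port. A sketch of the equivalent invocation (port 46464 as in the test; /tmp/demo is hypothetical):

    # serve the 9p mount on a fixed host port, e.g. to satisfy firewall rules
    minikube mount -p functional-618530 /tmp/demo:/mount-9p --port 46464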

TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2691937766/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2691937766/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2691937766/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T" /mount1: exit status 1 (347.217675ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 23:53:36.256967  533916 retry.go:31] will retry after 663.01834ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618530 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-618530 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2691937766/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2691937766/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618530 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2691937766/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)
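
The cleanup path verified here is the --kill flag, which terminates every lingering mount daemon for a profile in one call. A sketch (directories hypothetical):

    # start several mounts of one directory...
    minikube mount -p functional-618530 /tmp/demo:/mount1 &
    minikube mount -p functional-618530 /tmp/demo:/mount2 &
    # ...then reap all mount processes for the profile at once
    minikube mount -p functional-618530 --kill=true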

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-618530
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-618530
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-618530
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (95.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-200552 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 23:55:17.676197  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-200552 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m34.365155721s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (95.07s)
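
StartCluster brings up the multi-control-plane topology the rest of this group runs against. Reduced to the flags that matter, the start invocation looks roughly like:

    # --ha provisions multiple control-plane nodes; the status output later in
    # this report shows the shared apiserver endpoint 192.168.49.254:8443
    minikube start -p ha-200552 --ha --wait=true --memory=2200 \
      --driver=docker --container-runtime=containerd
    minikube -p ha-200552 status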

TestMultiControlPlane/serial/DeployApp (6.84s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-200552 -- rollout status deployment/busybox: (4.895532074s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-49cmf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-629th -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-sdtsn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-49cmf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-629th -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-sdtsn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-49cmf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-629th -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-sdtsn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.84s)

TestMultiControlPlane/serial/PingHostFromPods (1.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-49cmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-49cmf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-629th -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-629th -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-sdtsn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-200552 -- exec busybox-7dff88458-sdtsn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)
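
The awk/cut pipeline in these steps extracts the host's gateway address from busybox nslookup output (line 5 carries the resolved IP), then proves L3 reachability to it. Sketched outside the test harness (pod name hypothetical):

    kubectl exec busybox -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ping the address the lookup returned (192.168.49.1 in this run)
    kubectl exec busybox -- sh -c "ping -c 1 192.168.49.1"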

TestMultiControlPlane/serial/AddWorkerNode (21.38s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-200552 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-200552 -v=7 --alsologtostderr: (20.51870942s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.38s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-200552 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.25s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp testdata/cp-test.txt ha-200552:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3064569969/001/cp-test_ha-200552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552:/home/docker/cp-test.txt ha-200552-m02:/home/docker/cp-test_ha-200552_ha-200552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test_ha-200552_ha-200552-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552:/home/docker/cp-test.txt ha-200552-m03:/home/docker/cp-test_ha-200552_ha-200552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test_ha-200552_ha-200552-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552:/home/docker/cp-test.txt ha-200552-m04:/home/docker/cp-test_ha-200552_ha-200552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test_ha-200552_ha-200552-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp testdata/cp-test.txt ha-200552-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3064569969/001/cp-test_ha-200552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m02:/home/docker/cp-test.txt ha-200552:/home/docker/cp-test_ha-200552-m02_ha-200552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test_ha-200552-m02_ha-200552.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m02:/home/docker/cp-test.txt ha-200552-m03:/home/docker/cp-test_ha-200552-m02_ha-200552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test_ha-200552-m02_ha-200552-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m02:/home/docker/cp-test.txt ha-200552-m04:/home/docker/cp-test_ha-200552-m02_ha-200552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test_ha-200552-m02_ha-200552-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp testdata/cp-test.txt ha-200552-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3064569969/001/cp-test_ha-200552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m03:/home/docker/cp-test.txt ha-200552:/home/docker/cp-test_ha-200552-m03_ha-200552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test_ha-200552-m03_ha-200552.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m03:/home/docker/cp-test.txt ha-200552-m02:/home/docker/cp-test_ha-200552-m03_ha-200552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test_ha-200552-m03_ha-200552-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m03:/home/docker/cp-test.txt ha-200552-m04:/home/docker/cp-test_ha-200552-m03_ha-200552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test_ha-200552-m03_ha-200552-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp testdata/cp-test.txt ha-200552-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3064569969/001/cp-test_ha-200552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m04:/home/docker/cp-test.txt ha-200552:/home/docker/cp-test_ha-200552-m04_ha-200552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552 "sudo cat /home/docker/cp-test_ha-200552-m04_ha-200552.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m04:/home/docker/cp-test.txt ha-200552-m02:/home/docker/cp-test_ha-200552-m04_ha-200552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m02 "sudo cat /home/docker/cp-test_ha-200552-m04_ha-200552-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 cp ha-200552-m04:/home/docker/cp-test.txt ha-200552-m03:/home/docker/cp-test_ha-200552-m04_ha-200552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 ssh -n ha-200552-m03 "sudo cat /home/docker/cp-test_ha-200552-m04_ha-200552-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.25s)
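
CopyFile walks every direction minikube cp supports. The three argument shapes it permutes, sketched with names from the test (the /tmp target is illustrative):

    # host -> node
    minikube -p ha-200552 cp testdata/cp-test.txt ha-200552:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-200552 cp ha-200552:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p ha-200552 cp ha-200552:/home/docker/cp-test.txt ha-200552-m02:/home/docker/cp-test.txt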

TestMultiControlPlane/serial/StopSecondaryNode (12.55s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-200552 node stop m02 -v=7 --alsologtostderr: (11.884241912s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr: exit status 7 (666.584257ms)

-- stdout --
	ha-200552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-200552-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-200552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-200552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:56:32.790027  604484 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:56:32.790159  604484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:56:32.790170  604484 out.go:358] Setting ErrFile to fd 2...
	I1209 23:56:32.790176  604484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:56:32.790385  604484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1209 23:56:32.790604  604484 out.go:352] Setting JSON to false
	I1209 23:56:32.790639  604484 mustload.go:65] Loading cluster: ha-200552
	I1209 23:56:32.790765  604484 notify.go:220] Checking for updates...
	I1209 23:56:32.791163  604484 config.go:182] Loaded profile config "ha-200552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:56:32.791189  604484 status.go:174] checking status of ha-200552 ...
	I1209 23:56:32.791700  604484 cli_runner.go:164] Run: docker container inspect ha-200552 --format={{.State.Status}}
	I1209 23:56:32.810680  604484 status.go:371] ha-200552 host status = "Running" (err=<nil>)
	I1209 23:56:32.810717  604484 host.go:66] Checking if "ha-200552" exists ...
	I1209 23:56:32.811160  604484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-200552
	I1209 23:56:32.829236  604484 host.go:66] Checking if "ha-200552" exists ...
	I1209 23:56:32.829579  604484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:56:32.829627  604484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-200552
	I1209 23:56:32.850436  604484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/ha-200552/id_rsa Username:docker}
	I1209 23:56:32.940098  604484 ssh_runner.go:195] Run: systemctl --version
	I1209 23:56:32.944319  604484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:56:32.955092  604484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:56:33.005226  604484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-09 23:56:32.99551314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:56:33.005839  604484 kubeconfig.go:125] found "ha-200552" server: "https://192.168.49.254:8443"
	I1209 23:56:33.005875  604484 api_server.go:166] Checking apiserver status ...
	I1209 23:56:33.005907  604484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:56:33.016712  604484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1597/cgroup
	I1209 23:56:33.025785  604484 api_server.go:182] apiserver freezer: "11:freezer:/docker/2b5e3374fc47ebc42e1e6f6a40b7a9aac56ecf2693b9b560999a73d55a175cf9/kubepods/burstable/pod542d59c955a5a6643a53f02da1fe2053/f70b83ef7d9cabd7328aa0aa9e419a089811a53864b5a5e961b1b0502cc1e8dc"
	I1209 23:56:33.025875  604484 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2b5e3374fc47ebc42e1e6f6a40b7a9aac56ecf2693b9b560999a73d55a175cf9/kubepods/burstable/pod542d59c955a5a6643a53f02da1fe2053/f70b83ef7d9cabd7328aa0aa9e419a089811a53864b5a5e961b1b0502cc1e8dc/freezer.state
	I1209 23:56:33.034633  604484 api_server.go:204] freezer state: "THAWED"
	I1209 23:56:33.034661  604484 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 23:56:33.038506  604484 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 23:56:33.038538  604484 status.go:463] ha-200552 apiserver status = Running (err=<nil>)
	I1209 23:56:33.038551  604484 status.go:176] ha-200552 status: &{Name:ha-200552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:56:33.038568  604484 status.go:174] checking status of ha-200552-m02 ...
	I1209 23:56:33.038829  604484 cli_runner.go:164] Run: docker container inspect ha-200552-m02 --format={{.State.Status}}
	I1209 23:56:33.057526  604484 status.go:371] ha-200552-m02 host status = "Stopped" (err=<nil>)
	I1209 23:56:33.057563  604484 status.go:384] host is not running, skipping remaining checks
	I1209 23:56:33.057573  604484 status.go:176] ha-200552-m02 status: &{Name:ha-200552-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:56:33.057602  604484 status.go:174] checking status of ha-200552-m03 ...
	I1209 23:56:33.057852  604484 cli_runner.go:164] Run: docker container inspect ha-200552-m03 --format={{.State.Status}}
	I1209 23:56:33.075902  604484 status.go:371] ha-200552-m03 host status = "Running" (err=<nil>)
	I1209 23:56:33.075932  604484 host.go:66] Checking if "ha-200552-m03" exists ...
	I1209 23:56:33.076219  604484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-200552-m03
	I1209 23:56:33.093707  604484 host.go:66] Checking if "ha-200552-m03" exists ...
	I1209 23:56:33.093982  604484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:56:33.094025  604484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-200552-m03
	I1209 23:56:33.111491  604484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33295 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/ha-200552-m03/id_rsa Username:docker}
	I1209 23:56:33.204176  604484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:56:33.215611  604484 kubeconfig.go:125] found "ha-200552" server: "https://192.168.49.254:8443"
	I1209 23:56:33.215645  604484 api_server.go:166] Checking apiserver status ...
	I1209 23:56:33.215677  604484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:56:33.226031  604484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I1209 23:56:33.234933  604484 api_server.go:182] apiserver freezer: "11:freezer:/docker/468b9cfc06cbbdca4ff89e6955be98e25379a6a8731dad2ec6d6281d14b96727/kubepods/burstable/pod38a1d49b9b55f7297609dd950bd6294d/69919e96cc2b15f0eb61e660fd290debf44e1e1c2949f054a50b01e5689b8506"
	I1209 23:56:33.235021  604484 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/468b9cfc06cbbdca4ff89e6955be98e25379a6a8731dad2ec6d6281d14b96727/kubepods/burstable/pod38a1d49b9b55f7297609dd950bd6294d/69919e96cc2b15f0eb61e660fd290debf44e1e1c2949f054a50b01e5689b8506/freezer.state
	I1209 23:56:33.243016  604484 api_server.go:204] freezer state: "THAWED"
	I1209 23:56:33.243054  604484 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 23:56:33.247180  604484 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 23:56:33.247223  604484 status.go:463] ha-200552-m03 apiserver status = Running (err=<nil>)
	I1209 23:56:33.247233  604484 status.go:176] ha-200552-m03 status: &{Name:ha-200552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:56:33.247247  604484 status.go:174] checking status of ha-200552-m04 ...
	I1209 23:56:33.247509  604484 cli_runner.go:164] Run: docker container inspect ha-200552-m04 --format={{.State.Status}}
	I1209 23:56:33.265642  604484 status.go:371] ha-200552-m04 host status = "Running" (err=<nil>)
	I1209 23:56:33.265670  604484 host.go:66] Checking if "ha-200552-m04" exists ...
	I1209 23:56:33.266012  604484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-200552-m04
	I1209 23:56:33.283806  604484 host.go:66] Checking if "ha-200552-m04" exists ...
	I1209 23:56:33.284064  604484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:56:33.284107  604484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-200552-m04
	I1209 23:56:33.301342  604484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33300 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/ha-200552-m04/id_rsa Username:docker}
	I1209 23:56:33.395775  604484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:56:33.406285  604484 status.go:176] ha-200552-m04 status: &{Name:ha-200552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.55s)
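
Note that status deliberately exits non-zero (exit status 7 in this run) once any node is down, so the degraded state is scriptable from the return code alone:

    minikube -p ha-200552 node stop m02
    if ! minikube -p ha-200552 status; then
        echo "at least one node is not running"
    fi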

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (17.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-200552 node start m02 -v=7 --alsologtostderr: (16.430186372s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (17.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.23s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-200552 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-200552 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-200552 -v=7 --alsologtostderr: (37.061752837s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-200552 --wait=true -v=7 --alsologtostderr
E1209 23:57:33.813011  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:01.518546  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.584126  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.590665  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.602257  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.623775  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.665356  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.746941  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:13.908918  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:14.230680  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:14.872866  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:16.155228  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:18.717392  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:23.839632  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:34.081787  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-200552 --wait=true -v=7 --alsologtostderr: (1m23.041819031s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-200552
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.23s)
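
The property checked here is that a full stop/start cycle re-provisions exactly the nodes that existed before. Sketched:

    minikube node list -p ha-200552            # record the current nodes
    minikube stop -p ha-200552
    minikube start -p ha-200552 --wait=true    # restores all recorded nodes
    minikube node list -p ha-200552            # should match the first listing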

TestMultiControlPlane/serial/DeleteSecondaryNode (9.32s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 node delete m03 -v=7 --alsologtostderr
E1209 23:58:54.564197  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-200552 node delete m03 -v=7 --alsologtostderr: (8.528821791s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.32s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (35.86s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 stop -v=7 --alsologtostderr
E1209 23:59:35.526651  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-200552 stop -v=7 --alsologtostderr: (35.743070392s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr: exit status 7 (120.157035ms)

-- stdout --
	ha-200552
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-200552-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-200552-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1209 23:59:38.473769  621301 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:59:38.474092  621301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:59:38.474104  621301 out.go:358] Setting ErrFile to fd 2...
	I1209 23:59:38.474109  621301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:59:38.474341  621301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1209 23:59:38.474534  621301 out.go:352] Setting JSON to false
	I1209 23:59:38.474569  621301 mustload.go:65] Loading cluster: ha-200552
	I1209 23:59:38.474649  621301 notify.go:220] Checking for updates...
	I1209 23:59:38.475085  621301 config.go:182] Loaded profile config "ha-200552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:59:38.475110  621301 status.go:174] checking status of ha-200552 ...
	I1209 23:59:38.475558  621301 cli_runner.go:164] Run: docker container inspect ha-200552 --format={{.State.Status}}
	I1209 23:59:38.496438  621301 status.go:371] ha-200552 host status = "Stopped" (err=<nil>)
	I1209 23:59:38.496460  621301 status.go:384] host is not running, skipping remaining checks
	I1209 23:59:38.496467  621301 status.go:176] ha-200552 status: &{Name:ha-200552 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:59:38.496517  621301 status.go:174] checking status of ha-200552-m02 ...
	I1209 23:59:38.496870  621301 cli_runner.go:164] Run: docker container inspect ha-200552-m02 --format={{.State.Status}}
	I1209 23:59:38.518962  621301 status.go:371] ha-200552-m02 host status = "Stopped" (err=<nil>)
	I1209 23:59:38.519012  621301 status.go:384] host is not running, skipping remaining checks
	I1209 23:59:38.519023  621301 status.go:176] ha-200552-m02 status: &{Name:ha-200552-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:59:38.519066  621301 status.go:174] checking status of ha-200552-m04 ...
	I1209 23:59:38.519473  621301 cli_runner.go:164] Run: docker container inspect ha-200552-m04 --format={{.State.Status}}
	I1209 23:59:38.537643  621301 status.go:371] ha-200552-m04 host status = "Stopped" (err=<nil>)
	I1209 23:59:38.537688  621301 status.go:384] host is not running, skipping remaining checks
	I1209 23:59:38.537703  621301 status.go:176] ha-200552-m04 status: &{Name:ha-200552-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.86s)

TestMultiControlPlane/serial/RestartCluster (67.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-200552 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-200552 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.111493878s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.02s)
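For reference, the readiness check at ha_test.go:594 above is a go-template over each node's conditions; stripped of the harness quoting, an equivalent manual check is (a sketch; expect one " True" per Ready node):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'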
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (38.23s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-200552 --control-plane -v=7 --alsologtostderr
E1210 00:00:57.448121  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-200552 --control-plane -v=7 --alsologtostderr: (37.267993849s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-200552 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (41.9s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-333345 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-333345 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (41.90197792s)
--- PASS: TestJSONOutput/start/Command (41.90s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-333345 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-333345 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-333345 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-333345 --output=json --user=testUser: (5.796927245s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-645307 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-645307 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.996606ms)

-- stdout --
	{"specversion":"1.0","id":"7650957c-6e0a-4d7f-acfa-5bacac2dba55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-645307] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f5935cf-a978-4b7c-a7c7-0019c2ce2d17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"0a4dc304-93fd-4fc6-9739-008635f6bbcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c0ae29d-d1f1-4fd1-b04f-81d2f463d6df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig"}}
	{"specversion":"1.0","id":"02866fcd-af98-45f1-834c-22dccf434871","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube"}}
	{"specversion":"1.0","id":"daccde2f-9bc4-470d-a10c-bedaf8690df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8d4e268d-1833-4825-ba12-5d7066693504","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b48d8e9d-8e02-4338-8f0b-6d618eb98b7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-645307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-645307
--- PASS: TestErrorJSONOutput (0.24s)
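Every stdout line above is a CloudEvents envelope, so the JSON stream can be consumed with ordinary tooling; a minimal sketch (the jq filter is illustrative, not part of the test):

	out/minikube-linux-amd64 start -p json-output-error-645307 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64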
TestKicCustomNetwork/create_custom_network (37.81s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-274490 --network=
E1210 00:02:33.812708  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-274490 --network=: (35.743880669s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-274490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-274490
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-274490: (2.038845255s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.81s)

TestKicCustomNetwork/use_default_bridge_network (25.58s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-622459 --network=bridge
E1210 00:03:13.590934  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-622459 --network=bridge: (23.698691961s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-622459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-622459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-622459: (1.863007353s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.58s)

TestKicExistingNetwork (25.84s)
=== RUN   TestKicExistingNetwork
I1210 00:03:30.887657  533916 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 00:03:30.905195  533916 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 00:03:30.905270  533916 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 00:03:30.905289  533916 cli_runner.go:164] Run: docker network inspect existing-network
W1210 00:03:30.921885  533916 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 00:03:30.921921  533916 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1210 00:03:30.921935  533916 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1210 00:03:30.922070  533916 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 00:03:30.939744  533916 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0393b9c2cb53 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2f:84:31:3b} reservation:<nil>}
I1210 00:03:30.940278  533916 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f54d0}
I1210 00:03:30.940311  533916 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 00:03:30.940357  533916 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 00:03:31.006339  533916 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-367936 --network=existing-network
E1210 00:03:41.291048  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-367936 --network=existing-network: (23.825988138s)
helpers_test.go:175: Cleaning up "existing-network-367936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-367936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-367936: (1.859053802s)
I1210 00:03:56.709739  533916 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.84s)
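The pre-created network that this test consumes can be reproduced by hand with the same commands the harness logged above; a condensed sketch (192.168.58.0/24 was simply the first free private subnet on this host, since 192.168.49.0/24 was already taken):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	out/minikube-linux-amd64 start -p existing-network-367936 --network=existing-network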
TestKicCustomSubnet (26.34s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-574205 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-574205 --subnet=192.168.60.0/24: (24.109948234s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-574205 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-574205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-574205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-574205: (2.208391248s)
--- PASS: TestKicCustomSubnet (26.34s)
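The assertion behind this test simply pairs the --subnet flag with a docker network inspect of the resulting network; condensed from the logged commands (the expected value is inferred from the flag):

	out/minikube-linux-amd64 start -p custom-subnet-574205 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-574205 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24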
TestKicStaticIP (24.43s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-867100 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-867100 --static-ip=192.168.200.200: (22.057329559s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-867100 ip
helpers_test.go:175: Cleaning up "static-ip-867100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-867100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-867100: (2.207921257s)
--- PASS: TestKicStaticIP (24.43s)
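The static-IP variant follows the same start-then-read-back shape; condensed from the logged commands (the expected address is inferred from the flag):

	out/minikube-linux-amd64 start -p static-ip-867100 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-867100 ip   # expect 192.168.200.200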
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (51.34s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-051952 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-051952 --driver=docker  --container-runtime=containerd: (22.351468193s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-092279 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-092279 --driver=docker  --container-runtime=containerd: (24.02552456s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-051952
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-092279
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-092279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-092279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-092279: (1.869385814s)
helpers_test.go:175: Cleaning up "first-051952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-051952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-051952: (1.89740363s)
--- PASS: TestMinikubeProfile (51.34s)

TestMountStart/serial/StartWithMountFirst (8.59s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-172938 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-172938 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.593733529s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.59s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-172938 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
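The mount run above pairs a start with explicit 9p options (uid, gid, msize, port) with a directory listing over SSH; condensed from the logged commands (that /minikube-host is the host-mount target is inferred from the check itself):

	out/minikube-linux-amd64 start -p mount-start-1-172938 --memory=2048 --mount \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p mount-start-1-172938 ssh -- ls /minikube-host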
TestMountStart/serial/StartWithMountSecond (6.29s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-189211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-189211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.292832197s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.29s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-189211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-172938 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-172938 --alsologtostderr -v=5: (1.701154333s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-189211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-189211
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-189211: (1.214418172s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.62s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-189211
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-189211: (6.617036148s)
--- PASS: TestMountStart/serial/RestartStopped (7.62s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-189211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (67.69s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786074 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786074 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.168482326s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.69s)

TestMultiNode/serial/DeployApp2Nodes (20.14s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- rollout status deployment/busybox
E1210 00:07:33.812498  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-786074 -- rollout status deployment/busybox: (18.74822134s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-976bg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-976bg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-976bg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.14s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-976bg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-976bg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
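The host-reachability check above scrapes busybox's nslookup output for host.minikube.internal and pings the result; factored into two steps (the pod name is from this run, and the awk line offset assumes busybox's resolver output layout):

	HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-amd64 kubectl -p multinode-786074 -- exec busybox-7dff88458-7kr6c -- ping -c 1 "$HOST_IP"
	# resolved to 192.168.67.1 in this run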
TestMultiNode/serial/AddNode (16.02s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-786074 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-786074 -v 3 --alsologtostderr: (15.262452207s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.02s)

TestMultiNode/serial/MultiNodeLabels (0.08s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-786074 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.7s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp testdata/cp-test.txt multinode-786074:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3507005361/001/cp-test_multinode-786074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074:/home/docker/cp-test.txt multinode-786074-m02:/home/docker/cp-test_multinode-786074_multinode-786074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test_multinode-786074_multinode-786074-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074:/home/docker/cp-test.txt multinode-786074-m03:/home/docker/cp-test_multinode-786074_multinode-786074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m03 "sudo cat /home/docker/cp-test_multinode-786074_multinode-786074-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp testdata/cp-test.txt multinode-786074-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3507005361/001/cp-test_multinode-786074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074-m02:/home/docker/cp-test.txt multinode-786074:/home/docker/cp-test_multinode-786074-m02_multinode-786074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074 "sudo cat /home/docker/cp-test_multinode-786074-m02_multinode-786074.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074-m02:/home/docker/cp-test.txt multinode-786074-m03:/home/docker/cp-test_multinode-786074-m02_multinode-786074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m03 "sudo cat /home/docker/cp-test_multinode-786074-m02_multinode-786074-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp testdata/cp-test.txt multinode-786074-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3507005361/001/cp-test_multinode-786074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074-m03:/home/docker/cp-test.txt multinode-786074:/home/docker/cp-test_multinode-786074-m03_multinode-786074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074 "sudo cat /home/docker/cp-test_multinode-786074-m03_multinode-786074.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074-m03:/home/docker/cp-test.txt multinode-786074-m02:/home/docker/cp-test_multinode-786074-m03_multinode-786074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test_multinode-786074-m03_multinode-786074-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.00s)
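The copy matrix above exercises every direction for every node pair; it composes three primitives, each verified with sudo cat over SSH (destination filenames below are illustrative):

	out/minikube-linux-amd64 -p multinode-786074 cp testdata/cp-test.txt multinode-786074:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074:/home/docker/cp-test.txt /tmp/cp-test-local.txt
	out/minikube-linux-amd64 -p multinode-786074 cp multinode-786074:/home/docker/cp-test.txt multinode-786074-m02:/home/docker/cp-test-copy.txt
	out/minikube-linux-amd64 -p multinode-786074 ssh -n multinode-786074-m02 "sudo cat /home/docker/cp-test-copy.txt"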
TestMultiNode/serial/StopNode (2.14s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-786074 node stop m03: (1.187213729s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786074 status: exit status 7 (474.163803ms)

-- stdout --
	multinode-786074
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786074-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-786074-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr: exit status 7 (474.192072ms)

-- stdout --
	multinode-786074
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786074-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-786074-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 00:08:04.323043  685738 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:08:04.323317  685738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:08:04.323327  685738 out.go:358] Setting ErrFile to fd 2...
	I1210 00:08:04.323332  685738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:08:04.323556  685738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1210 00:08:04.323776  685738 out.go:352] Setting JSON to false
	I1210 00:08:04.323815  685738 mustload.go:65] Loading cluster: multinode-786074
	I1210 00:08:04.323923  685738 notify.go:220] Checking for updates...
	I1210 00:08:04.324317  685738 config.go:182] Loaded profile config "multinode-786074": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:08:04.324340  685738 status.go:174] checking status of multinode-786074 ...
	I1210 00:08:04.324791  685738 cli_runner.go:164] Run: docker container inspect multinode-786074 --format={{.State.Status}}
	I1210 00:08:04.345698  685738 status.go:371] multinode-786074 host status = "Running" (err=<nil>)
	I1210 00:08:04.345741  685738 host.go:66] Checking if "multinode-786074" exists ...
	I1210 00:08:04.346057  685738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-786074
	I1210 00:08:04.364671  685738 host.go:66] Checking if "multinode-786074" exists ...
	I1210 00:08:04.364975  685738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:08:04.365017  685738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-786074
	I1210 00:08:04.382273  685738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/multinode-786074/id_rsa Username:docker}
	I1210 00:08:04.475892  685738 ssh_runner.go:195] Run: systemctl --version
	I1210 00:08:04.480052  685738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:04.491072  685738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:08:04.538624  685738 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-10 00:08:04.530025249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:08:04.539262  685738 kubeconfig.go:125] found "multinode-786074" server: "https://192.168.67.2:8443"
	I1210 00:08:04.539297  685738 api_server.go:166] Checking apiserver status ...
	I1210 00:08:04.539336  685738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:08:04.550180  685738 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	I1210 00:08:04.559076  685738 api_server.go:182] apiserver freezer: "11:freezer:/docker/5b9bf98f4cb90b7bd0077ddfbb5e5b4cbc038325b43d3c4f50a73f3d21b1cc83/kubepods/burstable/podd7001813dbed801a991c9931c98c9838/a4a06d91ea783869412524f539c9a000d9f16156a1562cfce673b69c3c53ac16"
	I1210 00:08:04.559145  685738 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5b9bf98f4cb90b7bd0077ddfbb5e5b4cbc038325b43d3c4f50a73f3d21b1cc83/kubepods/burstable/podd7001813dbed801a991c9931c98c9838/a4a06d91ea783869412524f539c9a000d9f16156a1562cfce673b69c3c53ac16/freezer.state
	I1210 00:08:04.567313  685738 api_server.go:204] freezer state: "THAWED"
	I1210 00:08:04.567350  685738 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 00:08:04.571033  685738 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 00:08:04.571062  685738 status.go:463] multinode-786074 apiserver status = Running (err=<nil>)
	I1210 00:08:04.571074  685738 status.go:176] multinode-786074 status: &{Name:multinode-786074 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:08:04.571094  685738 status.go:174] checking status of multinode-786074-m02 ...
	I1210 00:08:04.571429  685738 cli_runner.go:164] Run: docker container inspect multinode-786074-m02 --format={{.State.Status}}
	I1210 00:08:04.589199  685738 status.go:371] multinode-786074-m02 host status = "Running" (err=<nil>)
	I1210 00:08:04.589238  685738 host.go:66] Checking if "multinode-786074-m02" exists ...
	I1210 00:08:04.589649  685738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-786074-m02
	I1210 00:08:04.609129  685738 host.go:66] Checking if "multinode-786074-m02" exists ...
	I1210 00:08:04.609402  685738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:08:04.609436  685738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-786074-m02
	I1210 00:08:04.628252  685738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/20062-527107/.minikube/machines/multinode-786074-m02/id_rsa Username:docker}
	I1210 00:08:04.716258  685738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:04.727150  685738 status.go:176] multinode-786074-m02 status: &{Name:multinode-786074-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:08:04.727186  685738 status.go:174] checking status of multinode-786074-m03 ...
	I1210 00:08:04.727488  685738 cli_runner.go:164] Run: docker container inspect multinode-786074-m03 --format={{.State.Status}}
	I1210 00:08:04.745219  685738 status.go:371] multinode-786074-m03 host status = "Stopped" (err=<nil>)
	I1210 00:08:04.745241  685738 status.go:384] host is not running, skipping remaining checks
	I1210 00:08:04.745248  685738 status.go:176] multinode-786074-m03 status: &{Name:multinode-786074-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
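The stderr above shows how `status` decides the apiserver is Running: find the kube-apiserver PID, confirm its freezer cgroup is THAWED, then probe /healthz. A rough manual equivalent, run inside the control-plane node (assumes the cgroup-v1 freezer layout seen on this host's 5.15 kernel; curl -k stands in for minikube's authenticated client):

	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer$CG/freezer.state"   # expect THAWED
	curl -sk https://192.168.67.2:8443/healthz           # expect ok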
TestMultiNode/serial/StartAfterStop (8.71s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-786074 node start m03 -v=7 --alsologtostderr: (8.029407569s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.71s)

TestMultiNode/serial/RestartKeepsNodes (79.19s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-786074
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-786074
E1210 00:08:13.584108  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-786074: (24.850513209s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786074 --wait=true -v=8 --alsologtostderr
E1210 00:08:56.880360  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786074 --wait=true -v=8 --alsologtostderr: (54.229103349s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-786074
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.19s)

TestMultiNode/serial/DeleteNode (5.06s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-786074 node delete m03: (4.487502713s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)

TestMultiNode/serial/StopMultiNode (23.93s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-786074 stop: (23.744949083s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786074 status: exit status 7 (93.718777ms)

-- stdout --
	multinode-786074
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-786074-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr: exit status 7 (87.188422ms)
-- stdout --
	multinode-786074
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-786074-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 00:10:01.594137  695398 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:10:01.594247  695398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:10:01.594256  695398 out.go:358] Setting ErrFile to fd 2...
	I1210 00:10:01.594260  695398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:10:01.594461  695398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1210 00:10:01.594633  695398 out.go:352] Setting JSON to false
	I1210 00:10:01.594663  695398 mustload.go:65] Loading cluster: multinode-786074
	I1210 00:10:01.594778  695398 notify.go:220] Checking for updates...
	I1210 00:10:01.595106  695398 config.go:182] Loaded profile config "multinode-786074": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:10:01.595128  695398 status.go:174] checking status of multinode-786074 ...
	I1210 00:10:01.595533  695398 cli_runner.go:164] Run: docker container inspect multinode-786074 --format={{.State.Status}}
	I1210 00:10:01.613809  695398 status.go:371] multinode-786074 host status = "Stopped" (err=<nil>)
	I1210 00:10:01.613837  695398 status.go:384] host is not running, skipping remaining checks
	I1210 00:10:01.613845  695398 status.go:176] multinode-786074 status: &{Name:multinode-786074 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:10:01.613874  695398 status.go:174] checking status of multinode-786074-m02 ...
	I1210 00:10:01.614145  695398 cli_runner.go:164] Run: docker container inspect multinode-786074-m02 --format={{.State.Status}}
	I1210 00:10:01.632252  695398 status.go:371] multinode-786074-m02 host status = "Stopped" (err=<nil>)
	I1210 00:10:01.632300  695398 status.go:384] host is not running, skipping remaining checks
	I1210 00:10:01.632310  695398 status.go:176] multinode-786074-m02 status: &{Name:multinode-786074-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)

TestMultiNode/serial/RestartMultiNode (55.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786074 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786074 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.616485238s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786074 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.25s)

TestMultiNode/serial/ValidateNameConflict (26.74s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-786074
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786074-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-786074-m02 --driver=docker  --container-runtime=containerd: exit status 14 (75.671772ms)
-- stdout --
	* [multinode-786074-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-786074-m02' is duplicated with machine name 'multinode-786074-m02' in profile 'multinode-786074'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786074-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786074-m03 --driver=docker  --container-runtime=containerd: (24.417194664s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-786074
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-786074: exit status 80 (290.969641ms)
-- stdout --
	* Adding node m03 to cluster multinode-786074 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-786074-m03 already exists in multinode-786074-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-786074-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-786074-m03: (1.903973603s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.74s)

TestPreload (113.83s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-209043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1210 00:12:33.812631  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-209043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.103562374s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209043 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-209043 image pull gcr.io/k8s-minikube/busybox: (2.60301111s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-209043
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-209043: (11.909188959s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-209043 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1210 00:13:13.584358  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-209043 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (24.69566621s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209043 image list
helpers_test.go:175: Cleaning up "test-preload-209043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-209043
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-209043: (2.27586909s)
--- PASS: TestPreload (113.83s)

TestScheduledStopUnix (97.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-282009 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-282009 --memory=2048 --driver=docker  --container-runtime=containerd: (21.379869268s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282009 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-282009 -n scheduled-stop-282009
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282009 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1210 00:13:43.238597  533916 retry.go:31] will retry after 63.287µs: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.239799  533916 retry.go:31] will retry after 222.956µs: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.240977  533916 retry.go:31] will retry after 211.725µs: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.242134  533916 retry.go:31] will retry after 400.068µs: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.243294  533916 retry.go:31] will retry after 467.897µs: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.244439  533916 retry.go:31] will retry after 611.703µs: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.245597  533916 retry.go:31] will retry after 1.190614ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.247843  533916 retry.go:31] will retry after 2.551752ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.251075  533916 retry.go:31] will retry after 2.881955ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.254277  533916 retry.go:31] will retry after 5.314804ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.260488  533916 retry.go:31] will retry after 3.360235ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.264722  533916 retry.go:31] will retry after 4.927557ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.269919  533916 retry.go:31] will retry after 19.011547ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.289193  533916 retry.go:31] will retry after 27.227987ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
I1210 00:13:43.317500  533916 retry.go:31] will retry after 41.388861ms: open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/scheduled-stop-282009/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282009 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-282009 -n scheduled-stop-282009
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-282009
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282009 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1210 00:14:36.652981  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-282009
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-282009: exit status 7 (73.210186ms)
-- stdout --
	scheduled-stop-282009
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-282009 -n scheduled-stop-282009
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-282009 -n scheduled-stop-282009: exit status 7 (74.013406ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-282009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-282009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-282009: (5.090971769s)
--- PASS: TestScheduledStopUnix (97.85s)

TestInsufficientStorage (12.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-828709 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-828709 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.284508021s)
-- stdout --
	{"specversion":"1.0","id":"b7fc7d02-c7f7-47be-a3f3-7ded3ad0f578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-828709] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"32204aab-f6be-41ee-bc54-0602633506e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"f14b4820-c2af-4f59-98cb-d4187c0f1f8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3565b00d-b076-4a7e-9a64-b72ec81a360e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig"}}
	{"specversion":"1.0","id":"49169cf6-4812-4e48-b68d-f77cc4c0b257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube"}}
	{"specversion":"1.0","id":"008a94a2-6001-4852-b4b8-b4b5479e713e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1f2d4331-3a20-4aad-af32-9ccb810a3036","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9823e2a6-47a7-4ba7-a53c-a935f366f0d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5cde45c8-7c7f-4da7-96ca-3c6667535edc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0336b585-8593-4a07-8050-8f0b597c6952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef29f9ab-ae91-4dd1-9fa8-63d46eb81169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"90373c4d-be13-49f3-a945-0db8e9ecb2b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-828709\" primary control-plane node in \"insufficient-storage-828709\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"205df566-feb5-4722-87a3-201c4a09fc9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"943ce1cf-b6a7-4da9-9bad-785a7f221f68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fad11d52-20c2-4edd-bb60-8a1717d9c28d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-828709 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-828709 --output=json --layout=cluster: exit status 7 (265.369861ms)
-- stdout --
	{"Name":"insufficient-storage-828709","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-828709","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1210 00:15:09.831406  718176 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-828709" does not appear in /home/jenkins/minikube-integration/20062-527107/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-828709 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-828709 --output=json --layout=cluster: exit status 7 (267.77795ms)
-- stdout --
	{"Name":"insufficient-storage-828709","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-828709","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1210 00:15:10.098987  718274 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-828709" does not appear in /home/jenkins/minikube-integration/20062-527107/kubeconfig
	E1210 00:15:10.109796  718274 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/insufficient-storage-828709/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-828709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-828709
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-828709: (1.843385763s)
--- PASS: TestInsufficientStorage (12.66s)

TestRunningBinaryUpgrade (64.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2960942739 start -p running-upgrade-197391 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2960942739 start -p running-upgrade-197391 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (30.605947683s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-197391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-197391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.996518374s)
helpers_test.go:175: Cleaning up "running-upgrade-197391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-197391
E1210 00:17:33.812122  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/addons-923727/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-197391: (2.572680628s)
--- PASS: TestRunningBinaryUpgrade (64.46s)

TestKubernetesUpgrade (332.56s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.869187349s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-473257
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-473257: (1.276080256s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-473257 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-473257 status --format={{.Host}}: exit status 7 (84.579179ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m32.311476536s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-473257 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (84.617077ms)
-- stdout --
	* [kubernetes-upgrade-473257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-473257
	    minikube start -p kubernetes-upgrade-473257 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4732572 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-473257 --kubernetes-version=v1.31.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-473257 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.469118558s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-473257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-473257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-473257: (2.387995671s)
--- PASS: TestKubernetesUpgrade (332.56s)

TestMissingContainerUpgrade (171.95s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2136104216 start -p missing-upgrade-208236 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2136104216 start -p missing-upgrade-208236 --memory=2200 --driver=docker  --container-runtime=containerd: (1m47.559330322s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-208236
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-208236
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-208236 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-208236 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.017548241s)
helpers_test.go:175: Cleaning up "missing-upgrade-208236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-208236
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-208236: (2.360419254s)
--- PASS: TestMissingContainerUpgrade (171.95s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204693 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-204693 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (76.578101ms)
-- stdout --
	* [NoKubernetes-204693] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestStoppedBinaryUpgrade/Setup (2.41s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.41s)

TestNoKubernetes/serial/StartWithK8s (26.1s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204693 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204693 --driver=docker  --container-runtime=containerd: (25.79302376s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-204693 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.10s)

TestStoppedBinaryUpgrade/Upgrade (154.07s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2402137895 start -p stopped-upgrade-276780 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2402137895 start -p stopped-upgrade-276780 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m45.525436242s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2402137895 -p stopped-upgrade-276780 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2402137895 -p stopped-upgrade-276780 stop: (20.483898071s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-276780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-276780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.064738488s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.07s)

TestNoKubernetes/serial/StartWithStopK8s (11.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204693 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204693 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.998500516s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-204693 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-204693 status -o json: exit status 2 (378.898537ms)
-- stdout --
	{"Name":"NoKubernetes-204693","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-204693
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-204693: (2.765484723s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.14s)

TestNoKubernetes/serial/Start (10.21s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204693 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204693 --no-kubernetes --driver=docker  --container-runtime=containerd: (10.211502938s)
--- PASS: TestNoKubernetes/serial/Start (10.21s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-204693 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-204693 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.783017ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.02s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-204693
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-204693: (1.179621637s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestNoKubernetes/serial/StartNoArgs (6.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204693 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204693 --driver=docker  --container-runtime=containerd: (6.33994577s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-204693 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-204693 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.824238ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestNetworkPlugins/group/false (3.98s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-085288 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-085288 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (169.376686ms)
-- stdout --
	* [false-085288] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1210 00:16:19.429796  733486 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:16:19.429937  733486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:16:19.429947  733486 out.go:358] Setting ErrFile to fd 2...
	I1210 00:16:19.429952  733486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:16:19.430135  733486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-527107/.minikube/bin
	I1210 00:16:19.430755  733486 out.go:352] Setting JSON to false
	I1210 00:16:19.431870  733486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10723,"bootTime":1733779056,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:16:19.431995  733486 start.go:139] virtualization: kvm guest
	I1210 00:16:19.434438  733486 out.go:177] * [false-085288] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:16:19.436433  733486 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:16:19.436483  733486 notify.go:220] Checking for updates...
	I1210 00:16:19.440121  733486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:16:19.441970  733486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-527107/kubeconfig
	I1210 00:16:19.443663  733486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-527107/.minikube
	I1210 00:16:19.445164  733486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:16:19.446766  733486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:16:19.449081  733486 config.go:182] Loaded profile config "force-systemd-env-538322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:16:19.449188  733486 config.go:182] Loaded profile config "missing-upgrade-208236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1210 00:16:19.449267  733486 config.go:182] Loaded profile config "stopped-upgrade-276780": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1210 00:16:19.449367  733486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:16:19.475785  733486 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1210 00:16:19.475917  733486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:16:19.533134  733486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:57 SystemTime:2024-12-10 00:16:19.52093788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:16:19.533249  733486 docker.go:318] overlay module found
	I1210 00:16:19.535629  733486 out.go:177] * Using the docker driver based on user configuration
	I1210 00:16:19.537243  733486 start.go:297] selected driver: docker
	I1210 00:16:19.537271  733486 start.go:901] validating driver "docker" against <nil>
	I1210 00:16:19.537289  733486 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:16:19.539805  733486 out.go:201] 
	W1210 00:16:19.541152  733486 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1210 00:16:19.542883  733486 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-085288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-085288
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-085288
>>> host: /etc/nsswitch.conf:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"
>>> host: /etc/hosts:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"
>>> host: /etc/resolv.conf:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-085288
>>> host: crictl pods:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"
>>> host: crictl containers:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"
>>> k8s: describe netcat deployment:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-085288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-085288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085288"

                                                
                                                
----------------------- debugLogs end: false-085288 [took: 3.621294537s] --------------------------------
helpers_test.go:175: Cleaning up "false-085288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-085288
--- PASS: TestNetworkPlugins/group/false (3.98s)
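
Note: every collector in the debugLogs dump above failed the same way because the false-085288 profile and its kubeconfig context were already gone when the dump ran. A minimal pre-check sketch (jq availability and the .valid[] layout of the profile-list JSON are assumptions):

# hedged sketch: confirm the profile and kubeconfig context exist before collecting debug logs
out/minikube-linux-amd64 profile list --output json | jq -r '.valid[].Name'   # assumes a .valid[] array in the JSON
kubectl config get-contexts -o name | grep -x false-085288 || echo "context false-085288 missing"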

                                                
                                    
TestPause/serial/Start (62.28s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-177497 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-177497 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.278241314s)
--- PASS: TestPause/serial/Start (62.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.67s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-177497 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-177497 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.654737732s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.67s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-276780
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-276780: (1.221749662s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-177497 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-177497 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-177497 --output=json --layout=cluster: exit status 2 (339.051773ms)

-- stdout --
	{"Name":"pause-177497","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-177497","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
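
The cluster-layout JSON above encodes component state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A hedged sketch for summarizing it, assuming jq is installed; note the status command itself exits 2 while the cluster is paused, as shown above:

out/minikube-linux-amd64 status -p pause-177497 --output=json --layout=cluster 2>/dev/null \
  | jq '{status: .StatusName, components: .Nodes[0].Components | map_values(.StatusName)}'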

                                                
                                    
TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-177497 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-177497 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-177497 --alsologtostderr -v=5: (1.024423262s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

                                                
                                    
TestPause/serial/DeletePaused (8.31s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-177497 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-177497 --alsologtostderr -v=5: (8.310681623s)
--- PASS: TestPause/serial/DeletePaused (8.31s)
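
Taken together, the serial pause tests above walk the full lifecycle. A condensed replay of the commands exercised, using the same binary and profile as in the log:

out/minikube-linux-amd64 pause -p pause-177497 --alsologtostderr -v=5
out/minikube-linux-amd64 unpause -p pause-177497 --alsologtostderr -v=5
out/minikube-linux-amd64 pause -p pause-177497 --alsologtostderr -v=5    # pausing again after an unpause also succeeds
out/minikube-linux-amd64 delete -p pause-177497 --alsologtostderr -v=5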

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.55s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.493582764s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-177497
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-177497: exit status 1 (17.457335ms)

-- stdout --
	[]
-- /stdout --

** stderr ** 
	Error response from daemon: get pause-177497: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.55s)
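
The verification above relies on plain docker CLI queries. A hedged, name-filtered variant of the same checks (expect empty output once the profile is gone):

docker ps -a --filter name=pause-177497 --format '{{.Names}}'
docker volume inspect pause-177497 2>&1 | grep -q 'no such volume' && echo "volume removed"
docker network ls --filter name=pause-177497 --format '{{.Name}}'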

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m0.518765079s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.45s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.448144812s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ljxtc" [86fa121e-640e-42bd-ac63-c09a2330d5c8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004034969s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
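
The readiness poll above waits up to 10m for pods labeled app=kindnet. A hedged one-line equivalent using kubectl wait:

kubectl --context kindnet-085288 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m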

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-085288 "pgrep -a kubelet"
I1210 00:19:32.925027  533916 config.go:182] Loaded profile config "kindnet-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nv45k" [677da331-2aa2-44a7-b5fa-de1024581d4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nv45k" [677da331-2aa2-44a7-b5fa-de1024581d4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004245769s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)
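
The NetCatPod step deploys testdata/netcat-deployment.yaml and then polls for app=netcat pods. A hedged equivalent that waits on the deployment rollout instead of polling:

kubectl --context kindnet-085288 replace --force -f testdata/netcat-deployment.yaml
kubectl --context kindnet-085288 rollout status deployment/netcat --timeout=15m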

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-085288 "pgrep -a kubelet"
I1210 00:19:35.806640  533916 config.go:182] Loaded profile config "auto-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t7pc2" [288f0661-5f70-44c9-b887-dd0d968a2bc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t7pc2" [288f0661-5f70-44c9-b887-dd0d968a2bc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003542821s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
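
Each plugin group runs the same three connectivity probes against the netcat deployment; for the auto profile they are:

kubectl --context auto-085288 exec deployment/netcat -- nslookup kubernetes.default                  # DNS through the cluster DNS service
kubectl --context auto-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # localhost inside the pod
kubectl --context auto-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin: the pod reaching itself via its own service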

                                                
                                    
TestNetworkPlugins/group/calico/Start (51.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (51.806864788s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.81s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (42.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (42.111706271s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (42.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-085288 "pgrep -a kubelet"
I1210 00:20:46.733860  533916 config.go:182] Loaded profile config "custom-flannel-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sbbsb" [1674331d-ff46-424d-950b-5b532e6f700d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sbbsb" [1674331d-ff46-424d-950b-5b532e6f700d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004476784s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7b822" [79477880-1bf5-4c36-92ff-5321d5e00514] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005135214s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-085288 "pgrep -a kubelet"
I1210 00:21:00.538197  533916 config.go:182] Loaded profile config "calico-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pzjvf" [29132d25-1629-41b5-b1a4-0786e5540a52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pzjvf" [29132d25-1629-41b5-b1a4-0786e5540a52] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004763877s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (36.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (36.838380294s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.84s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (43.64s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (43.635954399s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.64s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.03s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-085288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m11.029910147s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.03s)
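
For reference, the per-plugin start invocations in this group differ only in the CNI flag; each also passes --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m, omitted below for brevity:

out/minikube-linux-amd64 start -p kindnet-085288            --cni=kindnet                    --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 start -p calico-085288             --cni=calico                     --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 start -p custom-flannel-085288     --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 start -p enable-default-cni-085288 --enable-default-cni=true        --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 start -p flannel-085288            --cni=flannel                    --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 start -p bridge-085288             --cni=bridge                     --driver=docker --container-runtime=containerd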

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-085288 "pgrep -a kubelet"
I1210 00:21:53.012573  533916 config.go:182] Loaded profile config "enable-default-cni-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fj74m" [a378413b-2568-4753-b145-71ccd55782b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fj74m" [a378413b-2568-4753-b145-71ccd55782b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003453412s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ttjm5" [3ceb68ce-d0a7-45b5-9418-e12b5a7ea880] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004443142s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-085288 "pgrep -a kubelet"
I1210 00:22:19.462867  533916 config.go:182] Loaded profile config "flannel-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-24hj7" [0a8f90bc-f0c3-4b84-9bb7-6b76a8c133fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-24hj7" [0a8f90bc-f0c3-4b84-9bb7-6b76a8c133fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.036100735s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (140.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-280963 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m20.406595338s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-085288 "pgrep -a kubelet"
I1210 00:22:42.679523  533916 config.go:182] Loaded profile config "bridge-085288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-085288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b557c" [dfa433fe-1211-47fd-8119-444173bae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b557c" [dfa433fe-1211-47fd-8119-444173bae3a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004556326s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (60.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-757313 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-757313 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m0.811380286s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.81s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-085288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-085288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (66.35s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-073501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-073501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m6.353395592s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-337138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 00:23:13.584497  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/functional-618530/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-337138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (56.604554647s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-757313 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cac3ee71-e26f-4e4f-96b6-a3b68dccc504] Pending
helpers_test.go:344: "busybox" [cac3ee71-e26f-4e4f-96b6-a3b68dccc504] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cac3ee71-e26f-4e4f-96b6-a3b68dccc504] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004324985s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-757313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)
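
The DeployApp step above creates the busybox pod, waits for readiness, then reads the pod's open-file limit. A hedged replay of that sequence:

kubectl --context embed-certs-757313 create -f testdata/busybox.yaml
kubectl --context embed-certs-757313 wait pod busybox --for=condition=Ready --timeout=8m
kubectl --context embed-certs-757313 exec busybox -- /bin/sh -c "ulimit -n"   # prints the open-file limit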

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-757313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-757313 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)
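
The addon is enabled with an image override (registry.k8s.io/echoserver:1.4) under a fake registry domain. A hedged way to confirm the override landed on the deployment spec:

kubectl --context embed-certs-757313 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'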

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-757313 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-757313 --alsologtostderr -v=3: (11.990588747s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-337138 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82529113-9a17-49df-8d3e-e29483fc87f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [82529113-9a17-49df-8d3e-e29483fc87f9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004607377s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-337138 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757313 -n embed-certs-757313
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757313 -n embed-certs-757313: exit status 7 (78.833538ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-757313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
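Note: minikube status exits non-zero when a component is down, so the suite tolerates the failure here: with the profile stopped, the Host field prints Stopped and the command returns exit status 7, which the test explicitly annotates as "may be ok". A small sketch of reading that exit code from Go rather than treating any non-zero result as fatal (same binary and profile as above):

// status_exitcode_sketch.go: run `minikube status` and branch on the exit
// code instead of failing on any non-zero result.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "embed-certs-757313", "-n", "embed-certs-757313")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 7 is what the log shows for a cleanly stopped profile.
		fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run status: %v\n", err)
	}
}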

TestStartStop/group/embed-certs/serial/SecondStart (263.43s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-757313 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-757313 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m23.109001032s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757313 -n embed-certs-757313
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.43s)
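Note: SecondStart reuses the profile that was just stopped — the same start command, with identical flags including --embed-certs, brings the existing cluster back rather than creating a new one, and the trailing status call confirms the host is Running again. A compact sketch of that stop-and-restart cycle, under the same naming assumptions as the earlier snippets:

// secondstart_sketch.go: stop a profile, start it again with the same
// flags, and verify the host comes back.
package main

import (
	"log"
	"os/exec"
)

func mustRun(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "embed-certs-757313"
	mustRun("stop", "-p", profile)
	mustRun("start", "-p", profile, "--memory=2200", "--wait=true",
		"--embed-certs", "--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.31.2")
	mustRun("status", "--format={{.Host}}", "-p", profile) // expect Running
}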

TestStartStop/group/no-preload/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-073501 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e4124ddb-34c3-4180-98c6-08fa03642c7d] Pending
helpers_test.go:344: "busybox" [e4124ddb-34c3-4180-98c6-08fa03642c7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e4124ddb-34c3-4180-98c6-08fa03642c7d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004803117s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-073501 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-337138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-337138 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-337138 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-337138 --alsologtostderr -v=3: (12.076550013s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-073501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 00:24:26.641608  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:26.648084  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:26.659465  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:26.680840  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:26.722258  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-073501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.081377157s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-073501 describe deploy/metrics-server -n kube-system
E1210 00:24:26.803477  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (12.64s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-073501 --alsologtostderr -v=3
E1210 00:24:26.965787  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:27.287239  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:27.928712  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:29.210342  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-073501 --alsologtostderr -v=3: (12.637369745s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138: exit status 7 (108.456533ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-337138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (264.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-337138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 00:24:31.771949  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:35.990042  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:35.996965  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.008482  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.030921  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.072625  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.154643  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.316854  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.639082  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:36.893516  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:37.281077  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:38.562420  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-337138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m24.194315761s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (264.59s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073501 -n no-preload-073501
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073501 -n no-preload-073501: exit status 7 (99.220307ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-073501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (263.92s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-073501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 00:24:41.123791  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-073501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m23.613014281s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073501 -n no-preload-073501
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-280963 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [286b7342-fe82-48c2-a1c4-8a0e7782f71f] Pending
helpers_test.go:344: "busybox" [286b7342-fe82-48c2-a1c4-8a0e7782f71f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [286b7342-fe82-48c2-a1c4-8a0e7782f71f] Running
E1210 00:24:46.246220  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:47.135119  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005153472s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-280963 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-280963 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-280963 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087214832s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-280963 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-280963 --alsologtostderr -v=3
E1210 00:24:56.488174  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-280963 --alsologtostderr -v=3: (12.283746206s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280963 -n old-k8s-version-280963
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280963 -n old-k8s-version-280963: exit status 7 (82.407703ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-280963 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lw8qk" [8bbe09e3-5562-45d7-aafc-9083cfbe9add] Running
E1210 00:28:38.114304  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/calico-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004068536s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
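Note: UserAppExistsAfterStop re-checks the dashboard pod by label selector after the restart. The same readiness gate can be expressed with kubectl's built-in wait instead of a polling loop — a sketch, assuming the context from this run and the suite's 9m0s budget:

// dashboard_wait_sketch.go: block until pods matching the label selector
// report Ready, with an explicit timeout.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-757313",
		"wait", "--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=9m").CombinedOutput()
	if err != nil {
		log.Fatalf("dashboard never became Ready: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}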

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lw8qk" [8bbe09e3-5562-45d7-aafc-9083cfbe9add] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004880618s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-757313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-757313 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
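Note: VerifyKubernetesImages lists everything in the container runtime with image list --format=json and reports images outside minikube's expected set — here the kindnet and busybox images, as expected for this configuration. A hedged sketch of consuming that JSON from Go; the exact field names of the output are an assumption, so it decodes generically:

// imagelist_sketch.go: run `minikube image list --format=json` and print
// the decoded entries. The JSON schema is assumed, hence generic maps.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-757313",
		"image", "list", "--format=json").Output()
	if err != nil {
		log.Fatalf("image list: %v", err)
	}
	var images []map[string]interface{}
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		fmt.Println(img["repoTags"]) // field name is an assumption
	}
}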

TestStartStop/group/embed-certs/serial/Pause (2.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-757313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757313 -n embed-certs-757313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757313 -n embed-certs-757313: exit status 2 (301.692668ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-757313 -n embed-certs-757313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-757313 -n embed-certs-757313: exit status 2 (309.128623ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-757313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757313 -n embed-certs-757313
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-757313 -n embed-certs-757313
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.82s)
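Note: Pause freezes the control plane and kubelet rather than tearing anything down, which is why the two probes above disagree in a useful way: {{.APIServer}} reports Paused while {{.Kubelet}} reports Stopped, both with exit status 2, and unpause restores them. A sketch of that probe pair, same assumptions as the earlier snippets:

// pause_probe_sketch.go: pause a profile, read both status fields, unpause.
package main

import (
	"fmt"
	"os/exec"
)

func status(profile, field string) string {
	// Non-zero exit is expected while paused (exit status 2 in the log),
	// so the error is deliberately ignored here.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return string(out)
}

func main() {
	profile := "embed-certs-757313"
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
	fmt.Printf("APIServer: %s Kubelet: %s", status(profile, "APIServer"), status(profile, "Kubelet"))
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
}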

TestStartStop/group/newest-cni/serial/FirstStart (31.24s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-451721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-451721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (31.241106059s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.24s)
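Note: the newest-cni profile exercises a bring-your-own-CNI start: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 leaves pod networking to an external plugin, and --wait=apiserver,system_pods,default_sa narrows readiness to components that can come up without one. A sketch assembling that invocation from Go, flag values copied from the log:

// newestcni_start_sketch.go: launch the CNI-oriented profile with the same
// flags the suite uses; each flag is annotated with its role.
package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"start", "-p", "newest-cni-451721",
		"--memory=2200",
		"--wait=apiserver,system_pods,default_sa", // don't wait for pod networking
		"--feature-gates", "ServerSideApply=true",
		"--network-plugin=cni",                                 // defer CNI to the user
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16", // CIDR for the future plugin
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.31.2",
	}
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}
}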

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dmblq" [f6f827b4-57cc-48de-ba43-34af703b7983] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004250737s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dmblq" [f6f827b4-57cc-48de-ba43-34af703b7983] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003658277s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-337138 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rtggj" [6e5ffcc5-9bf6-4484-90fb-8f275400cbcf] Running
E1210 00:29:04.832114  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/bridge-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004112119s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-337138 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-337138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138: exit status 2 (319.916488ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138: exit status 2 (311.805463ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-337138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-337138 -n default-k8s-diff-port-337138
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rtggj" [6e5ffcc5-9bf6-4484-90fb-8f275400cbcf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004067265s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-073501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-073501 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-073501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073501 -n no-preload-073501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073501 -n no-preload-073501: exit status 2 (338.402901ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-073501 -n no-preload-073501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-073501 -n no-preload-073501: exit status 2 (338.022976ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-073501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073501 -n no-preload-073501
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-073501 -n no-preload-073501
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-451721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-451721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022905126s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-451721 --alsologtostderr -v=3
E1210 00:29:26.643034  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/kindnet-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-451721 --alsologtostderr -v=3: (1.246500712s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-451721 -n newest-cni-451721
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-451721 -n newest-cni-451721: exit status 7 (72.652676ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-451721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (13.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-451721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 00:29:35.989938  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/auto-085288/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:29:37.056399  533916 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-527107/.minikube/profiles/enable-default-cni-085288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-451721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (13.118462568s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-451721 -n newest-cni-451721
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-451721 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (2.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-451721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-451721 -n newest-cni-451721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-451721 -n newest-cni-451721: exit status 2 (293.435765ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-451721 -n newest-cni-451721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-451721 -n newest-cni-451721: exit status 2 (295.450022ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-451721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-451721 -n newest-cni-451721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-451721 -n newest-cni-451721
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fnf8h" [e10d30e1-eccd-4f68-98a8-c51aba9e4b9a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004425242s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fnf8h" [e10d30e1-eccd-4f68-98a8-c51aba9e4b9a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003729778s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-280963 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-280963 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-280963 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280963 -n old-k8s-version-280963
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280963 -n old-k8s-version-280963: exit status 2 (294.404258ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-280963 -n old-k8s-version-280963
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-280963 -n old-k8s-version-280963: exit status 2 (293.976126ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-280963 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280963 -n old-k8s-version-280963
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-280963 -n old-k8s-version-280963
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

Test skip (24/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
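Note: each of these skips is a guard at the top of the test — when the configured runtime is not the one under test, the function bails out before doing any work. A hypothetical reconstruction of the pattern (containerRuntime() is an invented stand-in for however the suite reads its configuration; the real check lives in docker_test.go):

// docker_flags_sketch_test.go: the shape of a runtime guard like the one
// that produced the SKIP above.
package main

import "testing"

func containerRuntime() string { return "containerd" } // stand-in for suite config

func TestDockerFlagsSketch(t *testing.T) {
	if rt := containerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
	// ... docker-specific assertions would follow ...
}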

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
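
The darwin-only and windows-only skips (the HyperKit tests, the kubectl subtests, and TestScheduledStopWindows above) key on the operating system instead of the runtime. A minimal sketch using only the standard library:

    package example

    import (
        "runtime"
        "testing"
    )

    func TestWindowsOnlyBehavior(t *testing.T) {
        // runtime.GOOS is "linux" on this Docker_Linux_containerd job,
        // so the test body never runs there.
        if runtime.GOOS != "windows" {
            t.Skip("test only runs on windows")
        }
        // windows-specific scheduled-stop checks would go here
    }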

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-085288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-085288

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-085288

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/hosts:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/resolv.conf:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-085288

>>> host: crictl pods:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: crictl containers:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> k8s: describe netcat deployment:
error: context "kubenet-085288" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-085288" does not exist

>>> k8s: netcat logs:
error: context "kubenet-085288" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-085288" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-085288" does not exist

>>> k8s: coredns logs:
error: context "kubenet-085288" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-085288" does not exist

>>> k8s: api server logs:
error: context "kubenet-085288" does not exist

>>> host: /etc/cni:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: ip a s:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: ip r s:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: iptables-save:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: iptables table nat:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-085288" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-085288" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-085288" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: kubelet daemon config:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> k8s: kubelet logs:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-085288

>>> host: docker daemon status:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: docker daemon config:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: docker system info:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: cri-docker daemon status:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: cri-docker daemon config:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: cri-dockerd version:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: containerd daemon status:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: containerd daemon config:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: containerd config dump:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: crio daemon status:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: crio daemon config:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: /etc/crio:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

>>> host: crio config:
* Profile "kubenet-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085288"

----------------------- debugLogs end: kubenet-085288 [took: 3.46488331s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-085288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-085288
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)
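
The debugLogs block above comes from running a fixed battery of diagnostic commands against the profile and printing each result under a ">>> name:" header; because kubenet-085288 was never started, every probe reports a missing context or profile. A simplified, abbreviated sketch of such a collector (the probe list and wiring are illustrative, not minikube's actual helper):

    package example

    import (
        "fmt"
        "os/exec"
    )

    // debugLogs runs diagnostic commands against a profile and prints each
    // result under a ">>> name:" header, mirroring the format of the
    // debugLogs sections in this report. The probe list is abbreviated.
    func debugLogs(profile string) {
        probes := []struct {
            name string
            cmd  *exec.Cmd
        }{
            {"k8s: nodes, services, endpoints, daemon sets, deployments and pods, ",
                exec.Command("kubectl", "--context", profile, "get", "all", "-A")},
            {"host: /etc/cni",
                exec.Command("minikube", "-p", profile, "ssh", "sudo ls -la /etc/cni")},
            {"host: containerd daemon status",
                exec.Command("minikube", "-p", profile, "ssh", "sudo systemctl status containerd")},
        }
        for _, p := range probes {
            fmt.Printf(">>> %s:\n", p.name)
            // CombinedOutput merges stdout and stderr, which is why failures
            // such as `error: context "kubenet-085288" does not exist` still
            // appear in the captured report.
            out, _ := p.cmd.CombinedOutput()
            fmt.Printf("%s\n", out)
        }
    }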

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-085288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-085288

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-085288

>>> host: /etc/nsswitch.conf:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/hosts:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/resolv.conf:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-085288

>>> host: crictl pods:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: crictl containers:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> k8s: describe netcat deployment:
error: context "cilium-085288" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-085288" does not exist

>>> k8s: netcat logs:
error: context "cilium-085288" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-085288" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-085288" does not exist

>>> k8s: coredns logs:
error: context "cilium-085288" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-085288" does not exist

>>> k8s: api server logs:
error: context "cilium-085288" does not exist

>>> host: /etc/cni:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: ip a s:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: ip r s:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: iptables-save:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: iptables table nat:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-085288

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-085288

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-085288" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-085288" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-085288

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-085288

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-085288" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-085288" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-085288" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-085288" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-085288" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: kubelet daemon config:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> k8s: kubelet logs:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-085288

>>> host: docker daemon status:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: docker daemon config:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: docker system info:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: cri-docker daemon status:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: cri-docker daemon config:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: cri-dockerd version:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: containerd daemon status:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: containerd daemon config:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: containerd config dump:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: crio daemon status:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: crio daemon config:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: /etc/crio:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

>>> host: crio config:
* Profile "cilium-085288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085288"

----------------------- debugLogs end: cilium-085288 [took: 3.805653467s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-085288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-085288
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-842274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-842274
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)