Test Report: KVM_Linux 17585

                    
ea770f64c27c5646b2ec1dfcd286218478f671de:2023-11-07:31788

Tests failed (2/321)

| Order | Failed test                                                       | Duration (s) |
|-------|-------------------------------------------------------------------|--------------|
| 107   | TestFunctional/parallel/License                                   | 0.15         |
| 365   | TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages | 2.98         |
TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (154.333776ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-729146 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-729146 "sudo crictl images -o json": exit status 1 (269.867679ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-729146 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
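The decode error above ("invalid character '\x1b' looking for beginning of value") means `crictl`'s stdout began with an ANSI color escape, not JSON: the runtime printed a colored FATA line instead of the image list. A small sketch of stripping such escapes to surface the underlying message (`stripANSI` is a hypothetical helper, not part of the test harness):

```go
package main

import (
	"fmt"
	"regexp"
)

// ansiRe matches ANSI terminal escape sequences such as the \x1b[...m
// color codes that prefix the crictl FATA line above; they are what makes
// json.Unmarshal report "invalid character '\x1b'".
var ansiRe = regexp.MustCompile(`\x1b\[[0-9;]*[A-Za-z]`)

// stripANSI removes escape sequences so the underlying text (error message
// or JSON) becomes readable. (Illustrative helper only.)
func stripANSI(s string) string {
	return ansiRe.ReplaceAllString(s, "")
}

func main() {
	// Simulated colored fatal line, as a logrus-style logger emits it.
	raw := "\x1b[31mFATA\x1b[0m[0000] validate service connection: ..."
	fmt.Println(stripANSI(raw)) // → FATA[0000] validate service connection: ...
}
```

Here the stripped text would still not be JSON — it is the real failure, which is consistent with a newer `crictl` probing the CRI v1 API against the legacy `dockershim.sock` endpoint of Kubernetes v1.16, which only serves the older v1alpha2 API (hence "unknown service runtime.v1.ImageService").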
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-729146 -n old-k8s-version-729146
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-729146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-729146 logs -n 25: (1.5817562s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-627949 sudo                                 | kubenet-627949               | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p kubenet-627949 sudo                                 | kubenet-627949               | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC |                     |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p kubenet-627949 sudo                                 | kubenet-627949               | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p kubenet-627949 sudo find                            | kubenet-627949               | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p kubenet-627949 sudo crio                            | kubenet-627949               | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p kubenet-627949                                      | kubenet-627949               | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-703291 | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	|         | disable-driver-mounts-703291                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385734 | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:50 UTC |
	|         | default-k8s-diff-port-385734                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-883054             | no-preload-883054            | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-883054                                   | no-preload-883054            | jenkins | v1.32.0 | 07 Nov 23 23:49 UTC | 07 Nov 23 23:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-729146        | old-k8s-version-729146       | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-729146                              | old-k8s-version-729146       | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-883054                  | no-preload-883054            | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-883054                                   | no-preload-883054            | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-729146             | old-k8s-version-729146       | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-729146                              | old-k8s-version-729146       | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-692502            | embed-certs-692502           | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-692502                                  | embed-certs-692502           | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-692502                 | embed-certs-692502           | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-692502                                  | embed-certs-692502           | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385734  | default-k8s-diff-port-385734 | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385734 | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | default-k8s-diff-port-385734                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385734       | default-k8s-diff-port-385734 | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385734 | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC |                     |
	|         | default-k8s-diff-port-385734                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| ssh     | -p old-k8s-version-729146 sudo                         | old-k8s-version-729146       | jenkins | v1.32.0 | 07 Nov 23 23:52 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:50:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:50:55.353768   61980 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:50:55.353960   61980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:50:55.353983   61980 out.go:309] Setting ErrFile to fd 2...
	I1107 23:50:55.354000   61980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:50:55.354189   61980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	I1107 23:50:55.354740   61980 out.go:303] Setting JSON to false
	I1107 23:50:55.355745   61980 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5608,"bootTime":1699395447,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:50:55.355832   61980 start.go:138] virtualization: kvm guest
	I1107 23:50:55.358444   61980 out.go:177] * [default-k8s-diff-port-385734] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:50:55.360254   61980 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:50:55.361757   61980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:50:55.360362   61980 notify.go:220] Checking for updates...
	I1107 23:50:55.364373   61980 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:50:55.365918   61980 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	I1107 23:50:55.367461   61980 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:50:55.368879   61980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:50:55.370946   61980 config.go:182] Loaded profile config "default-k8s-diff-port-385734": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:50:55.371524   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:50:55.371579   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:50:55.394942   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I1107 23:50:55.395433   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:50:55.396101   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:50:55.396153   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:50:55.396681   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:50:55.396900   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:50:55.397161   61980 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:50:55.397611   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:50:55.397677   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:50:55.412797   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1107 23:50:55.413253   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:50:55.413885   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:50:55.413933   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:50:55.414408   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:50:55.414595   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:50:55.458411   61980 out.go:177] * Using the kvm2 driver based on existing profile
	I1107 23:50:55.459933   61980 start.go:298] selected driver: kvm2
	I1107 23:50:55.459962   61980 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-385734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-385734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.88 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:50:55.460080   61980 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:50:55.460832   61980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:50:55.460925   61980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:50:55.480476   61980 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:50:55.480915   61980 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:50:55.480954   61980 cni.go:84] Creating CNI manager for ""
	I1107 23:50:55.480972   61980 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 23:50:55.480994   61980 start_flags.go:323] config:
	{Name:default-k8s-diff-port-385734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-385734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.88 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:50:55.481175   61980 iso.go:125] acquiring lock: {Name:mk6a728cebb26babf756ae6ad70b6747ae55e33b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:50:55.484851   61980 out.go:177] * Starting control plane node default-k8s-diff-port-385734 in cluster default-k8s-diff-port-385734
	I1107 23:50:52.875609   61089 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:50:52.874142   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:50:52.875622   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:50:52.875638   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHHostname
	I1107 23:50:52.875646   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHHostname
	I1107 23:50:52.874346   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHKeyPath
	I1107 23:50:52.875881   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHUsername
	I1107 23:50:52.876045   61089 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/no-preload-883054/id_rsa Username:docker}
	I1107 23:50:52.879668   61089 main.go:141] libmachine: (no-preload-883054) DBG | domain no-preload-883054 has defined MAC address 52:54:00:4c:fa:a9 in network mk-no-preload-883054
	I1107 23:50:52.879895   61089 main.go:141] libmachine: (no-preload-883054) DBG | domain no-preload-883054 has defined MAC address 52:54:00:4c:fa:a9 in network mk-no-preload-883054
	I1107 23:50:52.880152   61089 main.go:141] libmachine: (no-preload-883054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:fa:a9", ip: ""} in network mk-no-preload-883054: {Iface:virbr2 ExpiryTime:2023-11-08 00:50:19 +0000 UTC Type:0 Mac:52:54:00:4c:fa:a9 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:no-preload-883054 Clientid:01:52:54:00:4c:fa:a9}
	I1107 23:50:52.880187   61089 main.go:141] libmachine: (no-preload-883054) DBG | domain no-preload-883054 has defined IP address 192.168.50.211 and MAC address 52:54:00:4c:fa:a9 in network mk-no-preload-883054
	I1107 23:50:52.880224   61089 main.go:141] libmachine: (no-preload-883054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:fa:a9", ip: ""} in network mk-no-preload-883054: {Iface:virbr2 ExpiryTime:2023-11-08 00:50:19 +0000 UTC Type:0 Mac:52:54:00:4c:fa:a9 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:no-preload-883054 Clientid:01:52:54:00:4c:fa:a9}
	I1107 23:50:52.880238   61089 main.go:141] libmachine: (no-preload-883054) DBG | domain no-preload-883054 has defined IP address 192.168.50.211 and MAC address 52:54:00:4c:fa:a9 in network mk-no-preload-883054
	I1107 23:50:52.880486   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHPort
	I1107 23:50:52.880544   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHPort
	I1107 23:50:52.880758   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHKeyPath
	I1107 23:50:52.880972   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHUsername
	I1107 23:50:52.881151   61089 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/no-preload-883054/id_rsa Username:docker}
	I1107 23:50:52.881930   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHKeyPath
	I1107 23:50:52.882106   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHUsername
	I1107 23:50:52.882660   61089 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/no-preload-883054/id_rsa Username:docker}
	I1107 23:50:52.888598   61089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1107 23:50:52.888996   61089 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:50:52.889666   61089 main.go:141] libmachine: Using API Version  1
	I1107 23:50:52.889685   61089 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:50:52.890077   61089 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:50:52.890674   61089 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:50:52.890713   61089 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:50:52.914938   61089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I1107 23:50:52.915391   61089 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:50:52.915981   61089 main.go:141] libmachine: Using API Version  1
	I1107 23:50:52.916008   61089 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:50:52.916354   61089 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:50:52.916603   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetState
	I1107 23:50:52.918425   61089 main.go:141] libmachine: (no-preload-883054) Calling .DriverName
	I1107 23:50:52.920079   61089 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:50:52.920094   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:50:52.920112   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHHostname
	I1107 23:50:52.923048   61089 main.go:141] libmachine: (no-preload-883054) DBG | domain no-preload-883054 has defined MAC address 52:54:00:4c:fa:a9 in network mk-no-preload-883054
	I1107 23:50:52.923568   61089 main.go:141] libmachine: (no-preload-883054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:fa:a9", ip: ""} in network mk-no-preload-883054: {Iface:virbr2 ExpiryTime:2023-11-08 00:50:19 +0000 UTC Type:0 Mac:52:54:00:4c:fa:a9 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:no-preload-883054 Clientid:01:52:54:00:4c:fa:a9}
	I1107 23:50:52.923591   61089 main.go:141] libmachine: (no-preload-883054) DBG | domain no-preload-883054 has defined IP address 192.168.50.211 and MAC address 52:54:00:4c:fa:a9 in network mk-no-preload-883054
	I1107 23:50:52.923627   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHPort
	I1107 23:50:52.923831   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHKeyPath
	I1107 23:50:52.924423   61089 main.go:141] libmachine: (no-preload-883054) Calling .GetSSHUsername
	I1107 23:50:52.924651   61089 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/no-preload-883054/id_rsa Username:docker}
	I1107 23:50:53.058691   61089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:50:53.090913   61089 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:50:53.090944   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1107 23:50:53.120173   61089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:50:53.135376   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1107 23:50:53.135410   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1107 23:50:53.183077   61089 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:50:53.183106   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:50:53.220457   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1107 23:50:53.220481   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1107 23:50:53.263640   61089 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:50:53.263668   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:50:53.310329   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1107 23:50:53.310360   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1107 23:50:53.347228   61089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:50:53.353344   61089 node_ready.go:35] waiting up to 6m0s for node "no-preload-883054" to be "Ready" ...
	I1107 23:50:53.353370   61089 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 23:50:53.353474   61089 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:50:53.353526   61089 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:50:53.353535   61089 cache_images.go:262] succeeded pushing to: no-preload-883054
	I1107 23:50:53.353540   61089 cache_images.go:263] failed pushing to: 
	I1107 23:50:53.353573   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:53.353585   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:53.353897   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:53.353923   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:53.353921   61089 main.go:141] libmachine: (no-preload-883054) DBG | Closing plugin on server side
	I1107 23:50:53.353947   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:53.353961   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:53.355978   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:53.355993   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:53.356016   61089 main.go:141] libmachine: (no-preload-883054) DBG | Closing plugin on server side
	I1107 23:50:53.406742   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1107 23:50:53.406767   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1107 23:50:53.465761   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1107 23:50:53.465789   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1107 23:50:53.550399   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1107 23:50:53.550423   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1107 23:50:53.590814   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1107 23:50:53.590844   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1107 23:50:53.610284   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1107 23:50:53.610311   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1107 23:50:53.629016   61089 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:50:53.629043   61089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1107 23:50:53.652178   61089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:50:55.360353   61089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.301630269s)
	I1107 23:50:55.360404   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.360420   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.360537   61089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.24033186s)
	I1107 23:50:55.360557   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.360567   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.360731   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.360769   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.360783   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.360796   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.360963   61089 main.go:141] libmachine: (no-preload-883054) DBG | Closing plugin on server side
	I1107 23:50:55.360997   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.361006   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.361012   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.361029   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.361115   61089 main.go:141] libmachine: (no-preload-883054) DBG | Closing plugin on server side
	I1107 23:50:55.361154   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.361169   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.362988   61089 main.go:141] libmachine: (no-preload-883054) DBG | Closing plugin on server side
	I1107 23:50:55.363005   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.363019   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.373789   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.373815   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.374116   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.374136   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.376824   61089 node_ready.go:58] node "no-preload-883054" has status "Ready":"False"
	I1107 23:50:55.522042   61089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.174734725s)
	I1107 23:50:55.522103   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.522122   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.522447   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.522470   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.522484   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:55.522497   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:55.522709   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:55.522723   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:55.522734   61089 addons.go:467] Verifying addon metrics-server=true in "no-preload-883054"
	I1107 23:50:56.082040   61089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.429803888s)
	I1107 23:50:56.082106   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:56.082129   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:56.082509   61089 main.go:141] libmachine: (no-preload-883054) DBG | Closing plugin on server side
	I1107 23:50:56.082583   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:56.082600   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:56.082616   61089 main.go:141] libmachine: Making call to close driver server
	I1107 23:50:56.082627   61089 main.go:141] libmachine: (no-preload-883054) Calling .Close
	I1107 23:50:56.082917   61089 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:50:56.082941   61089 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:50:56.085397   61089 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-883054 addons enable metrics-server	
	
	
	I1107 23:50:56.087315   61089 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1107 23:50:51.734592   61281 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I1107 23:50:51.734649   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetIP
	I1107 23:50:51.737820   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:50:51.738281   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a8:34", ip: ""} in network mk-old-k8s-version-729146: {Iface:virbr3 ExpiryTime:2023-11-08 00:50:40 +0000 UTC Type:0 Mac:52:54:00:5c:a8:34 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:old-k8s-version-729146 Clientid:01:52:54:00:5c:a8:34}
	I1107 23:50:51.738331   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined IP address 192.168.61.191 and MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:50:51.738556   61281 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1107 23:50:51.742682   61281 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:50:51.757278   61281 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 23:50:51.757342   61281 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:50:51.778527   61281 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1107 23:50:51.778565   61281 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1107 23:50:51.778623   61281 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1107 23:50:51.788705   61281 ssh_runner.go:195] Run: which lz4
	I1107 23:50:51.792474   61281 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1107 23:50:51.796300   61281 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:50:51.796334   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I1107 23:50:53.388025   61281 docker.go:635] Took 1.595601 seconds to copy over tarball
	I1107 23:50:53.388129   61281 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:50:56.088891   61089 addons.go:502] enable addons completed in 3.280526902s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1107 23:50:55.485495   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:50:55.486134   61616 main.go:141] libmachine: (embed-certs-692502) DBG | unable to find current IP address of domain embed-certs-692502 in network mk-embed-certs-692502
	I1107 23:50:55.486164   61616 main.go:141] libmachine: (embed-certs-692502) DBG | I1107 23:50:55.486038   61759 retry.go:31] will retry after 1.159540509s: waiting for machine to come up
	I1107 23:50:56.646587   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:50:56.647302   61616 main.go:141] libmachine: (embed-certs-692502) DBG | unable to find current IP address of domain embed-certs-692502 in network mk-embed-certs-692502
	I1107 23:50:56.647346   61616 main.go:141] libmachine: (embed-certs-692502) DBG | I1107 23:50:56.647246   61759 retry.go:31] will retry after 1.73013372s: waiting for machine to come up
	I1107 23:50:58.378630   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:50:58.379281   61616 main.go:141] libmachine: (embed-certs-692502) DBG | unable to find current IP address of domain embed-certs-692502 in network mk-embed-certs-692502
	I1107 23:50:58.379311   61616 main.go:141] libmachine: (embed-certs-692502) DBG | I1107 23:50:58.379227   61759 retry.go:31] will retry after 2.167123118s: waiting for machine to come up
	I1107 23:50:55.486749   61980 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 23:50:55.486802   61980 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 23:50:55.486811   61980 cache.go:56] Caching tarball of preloaded images
	I1107 23:50:55.486908   61980 preload.go:174] Found /home/jenkins/minikube-integration/17585-9672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 23:50:55.486923   61980 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 23:50:55.487071   61980 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/config.json ...
	I1107 23:50:55.487319   61980 start.go:365] acquiring machines lock for default-k8s-diff-port-385734: {Name:mk3e692b91ecc5d91969800a7d207628f9be44b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:50:56.382937   61281 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994776857s)
	I1107 23:50:56.382968   61281 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 23:50:56.423014   61281 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1107 23:50:56.432627   61281 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3100 bytes)
	I1107 23:50:56.451913   61281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:50:56.567468   61281 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 23:50:59.852431   61281 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.284927965s)
	I1107 23:50:59.852533   61281 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:50:59.876476   61281 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:3.1
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 23:50:59.876500   61281 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1107 23:50:59.876509   61281 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 23:50:59.878960   61281 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:50:59.878991   61281 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1107 23:50:59.879002   61281 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1107 23:50:59.879041   61281 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1107 23:50:59.879113   61281 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1107 23:50:59.879152   61281 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1107 23:50:59.878961   61281 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1107 23:50:59.879196   61281 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1107 23:50:59.880479   61281 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1107 23:50:59.880479   61281 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1107 23:50:59.880486   61281 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1107 23:50:59.880528   61281 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1107 23:50:59.880588   61281 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:50:59.880591   61281 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1107 23:50:59.880486   61281 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1107 23:50:59.880980   61281 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1107 23:51:00.027661   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1107 23:51:00.027719   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1107 23:51:00.037726   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1107 23:51:00.048707   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1107 23:51:00.050958   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1107 23:51:00.053282   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1107 23:51:00.066696   61281 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1107 23:51:00.066906   61281 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1107 23:51:00.066956   61281 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I1107 23:51:00.066852   61281 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1107 23:51:00.067020   61281 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I1107 23:51:00.067078   61281 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I1107 23:51:00.085476   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1107 23:51:00.096487   61281 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1107 23:51:00.096602   61281 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1107 23:51:00.096666   61281 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1107 23:51:00.179214   61281 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1107 23:51:00.179262   61281 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1107 23:51:00.179323   61281 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1107 23:51:00.179425   61281 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1107 23:51:00.179452   61281 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1107 23:51:00.179477   61281 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1107 23:51:00.179588   61281 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1107 23:51:00.179661   61281 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1107 23:51:00.179713   61281 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1107 23:51:00.179734   61281 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1107 23:51:00.179760   61281 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I1107 23:51:00.179823   61281 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1107 23:51:00.227372   61281 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1107 23:51:00.227484   61281 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1107 23:51:00.227552   61281 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1107 23:51:00.526038   61281 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:51:00.554109   61281 cache_images.go:92] LoadImages completed in 677.581914ms
	W1107 23:51:00.554209   61281 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1107 23:51:00.554279   61281 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 23:51:00.594068   61281 cni.go:84] Creating CNI manager for ""
	I1107 23:51:00.594115   61281 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1107 23:51:00.594141   61281 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:51:00.594193   61281 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.191 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-729146 NodeName:old-k8s-version-729146 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1107 23:51:00.594377   61281 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-729146"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-729146
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.191:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:51:00.594491   61281 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-729146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-729146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:51:00.594558   61281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1107 23:51:00.608030   61281 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:51:00.608109   61281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:51:00.620319   61281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I1107 23:51:00.643067   61281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:51:00.665555   61281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I1107 23:51:00.688165   61281 ssh_runner.go:195] Run: grep 192.168.61.191	control-plane.minikube.internal$ /etc/hosts
	I1107 23:51:00.693316   61281 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:51:00.708413   61281 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146 for IP: 192.168.61.191
	I1107 23:51:00.708460   61281 certs.go:190] acquiring lock for shared ca certs: {Name:mkae01d77fc83079b31fa0cfd00a77c051ede193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:00.708643   61281 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9672/.minikube/ca.key
	I1107 23:51:00.708708   61281 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.key
	I1107 23:51:00.708808   61281 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.key
	I1107 23:51:00.708885   61281 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/apiserver.key.9d195e5e
	I1107 23:51:00.708949   61281 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/proxy-client.key
	I1107 23:51:00.709118   61281 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866.pem (1338 bytes)
	W1107 23:51:00.709160   61281 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866_empty.pem, impossibly tiny 0 bytes
	I1107 23:51:00.709176   61281 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:51:00.709213   61281 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:51:00.709261   61281 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:51:00.709297   61281 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem (1679 bytes)
	I1107 23:51:00.709390   61281 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem (1708 bytes)
	I1107 23:51:00.710278   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:51:00.737163   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:51:00.764912   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:51:00.791418   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:51:00.823647   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:51:00.852082   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:51:00.876883   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:51:00.903427   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:51:00.931275   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem --> /usr/share/ca-certificates/168662.pem (1708 bytes)
	I1107 23:51:00.957597   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:51:00.981943   61281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866.pem --> /usr/share/ca-certificates/16866.pem (1338 bytes)
	I1107 23:51:01.008963   61281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:51:01.029221   61281 ssh_runner.go:195] Run: openssl version
	I1107 23:51:01.037018   61281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:51:01.052137   61281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:01.058674   61281 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:01.058794   61281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:01.066754   61281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:51:01.081771   61281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16866.pem && ln -fs /usr/share/ca-certificates/16866.pem /etc/ssl/certs/16866.pem"
	I1107 23:51:01.093583   61281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16866.pem
	I1107 23:51:01.098877   61281 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:06 /usr/share/ca-certificates/16866.pem
	I1107 23:51:01.098973   61281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16866.pem
	I1107 23:51:01.105395   61281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16866.pem /etc/ssl/certs/51391683.0"
	I1107 23:51:01.118602   61281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168662.pem && ln -fs /usr/share/ca-certificates/168662.pem /etc/ssl/certs/168662.pem"
	I1107 23:51:01.131768   61281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168662.pem
	I1107 23:51:01.138255   61281 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:06 /usr/share/ca-certificates/168662.pem
	I1107 23:51:01.138326   61281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168662.pem
	I1107 23:51:01.146127   61281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168662.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:51:01.160741   61281 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:51:01.166645   61281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1107 23:51:01.173276   61281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1107 23:51:01.179395   61281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1107 23:51:01.185198   61281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1107 23:51:01.191080   61281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1107 23:51:01.197160   61281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1107 23:51:01.203515   61281 kubeadm.go:404] StartCluster: {Name:old-k8s-version-729146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-729146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:51:01.203687   61281 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 23:51:01.225456   61281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:51:01.235658   61281 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1107 23:51:01.235685   61281 kubeadm.go:636] restartCluster start
	I1107 23:51:01.235741   61281 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 23:51:01.245382   61281 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:01.245945   61281 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-729146" does not appear in /home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:51:01.246199   61281 kubeconfig.go:146] "old-k8s-version-729146" context is missing from /home/jenkins/minikube-integration/17585-9672/kubeconfig - will repair!
	I1107 23:51:01.246681   61281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9672/kubeconfig: {Name:mk1460bde29620caf14dc9f78463d79ec8617f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:01.248018   61281 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 23:51:01.257419   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:01.257481   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:01.269885   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:01.269907   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:01.269975   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:01.281093   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:50:59.005573   61089 node_ready.go:58] node "no-preload-883054" has status "Ready":"False"
	I1107 23:50:59.883942   61089 node_ready.go:49] node "no-preload-883054" has status "Ready":"True"
	I1107 23:50:59.883961   61089 node_ready.go:38] duration metric: took 6.530576006s waiting for node "no-preload-883054" to be "Ready" ...
	I1107 23:50:59.883969   61089 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:50:59.894318   61089 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jrq8h" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:00.548734   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:00.549403   61616 main.go:141] libmachine: (embed-certs-692502) DBG | unable to find current IP address of domain embed-certs-692502 in network mk-embed-certs-692502
	I1107 23:51:00.549430   61616 main.go:141] libmachine: (embed-certs-692502) DBG | I1107 23:51:00.549337   61759 retry.go:31] will retry after 3.505590216s: waiting for machine to come up
	I1107 23:51:04.058871   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:04.059284   61616 main.go:141] libmachine: (embed-certs-692502) DBG | unable to find current IP address of domain embed-certs-692502 in network mk-embed-certs-692502
	I1107 23:51:04.059310   61616 main.go:141] libmachine: (embed-certs-692502) DBG | I1107 23:51:04.059260   61759 retry.go:31] will retry after 3.374453725s: waiting for machine to come up
	I1107 23:51:01.781739   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:01.781856   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:01.796576   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:02.281776   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:02.281867   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:02.299358   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:02.781885   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:02.781970   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:02.794117   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:03.282028   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:03.282104   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:03.294991   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:03.781421   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:03.781521   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:03.794333   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:04.281923   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:04.281996   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:04.294054   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:04.781662   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:04.781781   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:04.794066   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:05.281797   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:05.281866   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:05.295373   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:05.782038   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:05.782132   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:05.794277   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:06.281859   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:06.281960   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:06.294126   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:01.918242   61089 pod_ready.go:92] pod "coredns-5dd5756b68-jrq8h" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:01.918280   61089 pod_ready.go:81] duration metric: took 2.02392317s waiting for pod "coredns-5dd5756b68-jrq8h" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.918293   61089 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.925491   61089 pod_ready.go:92] pod "etcd-no-preload-883054" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:01.925520   61089 pod_ready.go:81] duration metric: took 7.219302ms waiting for pod "etcd-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.925534   61089 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.932801   61089 pod_ready.go:92] pod "kube-apiserver-no-preload-883054" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:01.932834   61089 pod_ready.go:81] duration metric: took 7.290863ms waiting for pod "kube-apiserver-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.932849   61089 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.940109   61089 pod_ready.go:92] pod "kube-controller-manager-no-preload-883054" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:01.940134   61089 pod_ready.go:81] duration metric: took 7.277114ms waiting for pod "kube-controller-manager-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:01.940147   61089 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jp2ww" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:02.282639   61089 pod_ready.go:92] pod "kube-proxy-jp2ww" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:02.282672   61089 pod_ready.go:81] duration metric: took 342.515876ms waiting for pod "kube-proxy-jp2ww" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:02.282688   61089 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:02.673018   61089 pod_ready.go:92] pod "kube-scheduler-no-preload-883054" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:02.673050   61089 pod_ready.go:81] duration metric: took 390.352442ms waiting for pod "kube-scheduler-no-preload-883054" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:02.673064   61089 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:04.987294   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:07.437993   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.438525   61616 main.go:141] libmachine: (embed-certs-692502) Found IP for machine: 192.168.72.92
	I1107 23:51:07.438550   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has current primary IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.438559   61616 main.go:141] libmachine: (embed-certs-692502) Reserving static IP address...
	I1107 23:51:07.439089   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "embed-certs-692502", mac: "52:54:00:7f:2e:94", ip: "192.168.72.92"} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.439119   61616 main.go:141] libmachine: (embed-certs-692502) Reserved static IP address: 192.168.72.92
	I1107 23:51:07.439140   61616 main.go:141] libmachine: (embed-certs-692502) DBG | skip adding static IP to network mk-embed-certs-692502 - found existing host DHCP lease matching {name: "embed-certs-692502", mac: "52:54:00:7f:2e:94", ip: "192.168.72.92"}
	I1107 23:51:07.439155   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Getting to WaitForSSH function...
	I1107 23:51:07.439166   61616 main.go:141] libmachine: (embed-certs-692502) Waiting for SSH to be available...
	I1107 23:51:07.441459   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.441843   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.441878   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.441965   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Using SSH client type: external
	I1107 23:51:07.441993   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa (-rw-------)
	I1107 23:51:07.442056   61616 main.go:141] libmachine: (embed-certs-692502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:51:07.442075   61616 main.go:141] libmachine: (embed-certs-692502) DBG | About to run SSH command:
	I1107 23:51:07.442091   61616 main.go:141] libmachine: (embed-certs-692502) DBG | exit 0
	I1107 23:51:07.545397   61616 main.go:141] libmachine: (embed-certs-692502) DBG | SSH cmd err, output: <nil>: 
	I1107 23:51:07.545761   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetConfigRaw
	I1107 23:51:07.546398   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetIP
	I1107 23:51:07.548935   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.549292   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.549331   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.549516   61616 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/config.json ...
	I1107 23:51:07.549710   61616 machine.go:88] provisioning docker machine ...
	I1107 23:51:07.549728   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:07.549893   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetMachineName
	I1107 23:51:07.550052   61616 buildroot.go:166] provisioning hostname "embed-certs-692502"
	I1107 23:51:07.550070   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetMachineName
	I1107 23:51:07.550283   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:07.552768   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.553122   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.553159   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.553302   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:07.553510   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:07.553666   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:07.553784   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:07.553941   61616 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:07.554315   61616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I1107 23:51:07.554330   61616 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-692502 && echo "embed-certs-692502" | sudo tee /etc/hostname
	I1107 23:51:07.701226   61616 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-692502
	
	I1107 23:51:07.701263   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:07.704243   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.704690   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.704726   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.704945   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:07.705153   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:07.705368   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:07.705544   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:07.705739   61616 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:07.706073   61616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I1107 23:51:07.706093   61616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-692502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-692502/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-692502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:51:07.855543   61616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:51:07.855590   61616 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9672/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9672/.minikube}
	I1107 23:51:07.855630   61616 buildroot.go:174] setting up certificates
	I1107 23:51:07.855640   61616 provision.go:83] configureAuth start
	I1107 23:51:07.855649   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetMachineName
	I1107 23:51:07.855992   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetIP
	I1107 23:51:07.858776   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.858992   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.859023   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.859198   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:07.861464   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.861785   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.861805   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.861987   61616 provision.go:138] copyHostCerts
	I1107 23:51:07.862030   61616 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9672/.minikube/ca.pem, removing ...
	I1107 23:51:07.862048   61616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9672/.minikube/ca.pem
	I1107 23:51:07.862098   61616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9672/.minikube/ca.pem (1082 bytes)
	I1107 23:51:07.862176   61616 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9672/.minikube/cert.pem, removing ...
	I1107 23:51:07.862185   61616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9672/.minikube/cert.pem
	I1107 23:51:07.862210   61616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9672/.minikube/cert.pem (1123 bytes)
	I1107 23:51:07.862279   61616 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9672/.minikube/key.pem, removing ...
	I1107 23:51:07.862294   61616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9672/.minikube/key.pem
	I1107 23:51:07.862318   61616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9672/.minikube/key.pem (1679 bytes)
	I1107 23:51:07.862367   61616 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-692502 san=[192.168.72.92 192.168.72.92 localhost 127.0.0.1 minikube embed-certs-692502]
	I1107 23:51:07.957455   61616 provision.go:172] copyRemoteCerts
	I1107 23:51:07.957516   61616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:51:07.957550   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:07.959823   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.960111   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:07.960130   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:07.960351   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:07.960565   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:07.960767   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:07.960916   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:08.053836   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 23:51:08.075474   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1107 23:51:08.097877   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:51:08.120315   61616 provision.go:86] duration metric: configureAuth took 264.655209ms
	I1107 23:51:08.120351   61616 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:51:08.120602   61616 config.go:182] Loaded profile config "embed-certs-692502": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:51:08.120647   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:08.120991   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:08.124156   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:08.124597   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:08.124625   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:08.124812   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:08.125008   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:08.125197   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:08.125379   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:08.125571   61616 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:08.126068   61616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I1107 23:51:08.126086   61616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 23:51:08.259298   61616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1107 23:51:08.259319   61616 buildroot.go:70] root file system type: tmpfs
	I1107 23:51:08.259448   61616 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 23:51:08.259481   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:08.262550   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:08.262950   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:08.262979   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:08.263193   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:08.263435   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:08.263631   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:08.263821   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:08.263992   61616 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:08.264494   61616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I1107 23:51:08.264608   61616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 23:51:08.415621   61616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 23:51:08.415673   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:08.418565   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:08.418929   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:08.418960   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:08.419159   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:08.419332   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:08.419530   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:08.419683   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:08.419857   61616 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:08.420251   61616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I1107 23:51:08.420278   61616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 23:51:09.365446   61616 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1107 23:51:09.365484   61616 machine.go:91] provisioned docker machine in 1.815761082s
	I1107 23:51:09.365499   61616 start.go:300] post-start starting for "embed-certs-692502" (driver="kvm2")
	I1107 23:51:09.365515   61616 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:51:09.365535   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:09.365930   61616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:51:09.365967   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:09.368777   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.369099   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:09.369135   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.369290   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:09.369513   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:09.369671   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:09.369799   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:09.462901   61616 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:51:09.466901   61616 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:51:09.466930   61616 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9672/.minikube/addons for local assets ...
	I1107 23:51:09.467007   61616 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9672/.minikube/files for local assets ...
	I1107 23:51:09.467098   61616 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem -> 168662.pem in /etc/ssl/certs
	I1107 23:51:09.467213   61616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:51:09.475541   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem --> /etc/ssl/certs/168662.pem (1708 bytes)
	I1107 23:51:09.503446   61616 start.go:303] post-start completed in 137.931258ms
	I1107 23:51:09.503474   61616 fix.go:56] fixHost completed within 20.461001713s
	I1107 23:51:09.503499   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:09.506732   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.507168   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:09.507197   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.507414   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:09.507657   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:09.507826   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:09.507968   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:09.508137   61616 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:09.508489   61616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I1107 23:51:09.508508   61616 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:51:09.650063   61980 start.go:369] acquired machines lock for "default-k8s-diff-port-385734" in 14.162702874s
	I1107 23:51:09.650118   61980 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:51:09.650128   61980 fix.go:54] fixHost starting: 
	I1107 23:51:09.650633   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:09.650693   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:09.670812   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I1107 23:51:09.671211   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:09.671770   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:09.671793   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:09.672134   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:09.672335   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:09.672495   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:09.674186   61980 fix.go:102] recreateIfNeeded on default-k8s-diff-port-385734: state=Stopped err=<nil>
	I1107 23:51:09.674214   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	W1107 23:51:09.674380   61980 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:51:09.746070   61980 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385734" ...
	I1107 23:51:09.864169   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Start
	I1107 23:51:09.864479   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Ensuring networks are active...
	I1107 23:51:09.865727   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Ensuring network default is active
	I1107 23:51:09.866237   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Ensuring network mk-default-k8s-diff-port-385734 is active
	I1107 23:51:09.866773   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Getting domain xml...
	I1107 23:51:09.867620   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Creating domain...
	I1107 23:51:06.782120   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:06.782211   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:06.794619   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:07.281196   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:07.281270   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:07.294823   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:07.781324   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:07.781462   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:07.795206   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:08.282154   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:08.282233   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:08.295383   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:08.781479   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:08.781591   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:08.798081   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:09.281453   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:09.281544   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:09.295787   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:09.782018   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:09.782081   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:09.795821   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:10.281445   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:10.281537   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:10.295843   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:10.781430   61281 api_server.go:166] Checking apiserver status ...
	I1107 23:51:10.781527   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:10.797230   61281 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:11.257520   61281 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1107 23:51:11.257553   61281 kubeadm.go:1128] stopping kube-system containers ...
	I1107 23:51:11.257632   61281 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 23:51:11.284109   61281 docker.go:469] Stopping containers: [f7d36f82e82a 3336323861f2 e0850594ac17 dd434c70f1a4 14ae9921b78f daca8e16f339 70c344c74bd2 381c9febe570 5add8a846fd7 ddd4cce1319b e5d102a1a6a0 6ea3e3345d15 8f3433f73cab 732d9b32516d]
	I1107 23:51:11.284195   61281 ssh_runner.go:195] Run: docker stop f7d36f82e82a 3336323861f2 e0850594ac17 dd434c70f1a4 14ae9921b78f daca8e16f339 70c344c74bd2 381c9febe570 5add8a846fd7 ddd4cce1319b e5d102a1a6a0 6ea3e3345d15 8f3433f73cab 732d9b32516d
	I1107 23:51:11.311375   61281 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 23:51:11.330866   61281 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:51:11.342197   61281 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:51:11.342268   61281 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:51:06.988406   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:09.489057   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:09.649911   61616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699401069.597960387
	
	I1107 23:51:09.649939   61616 fix.go:206] guest clock: 1699401069.597960387
	I1107 23:51:09.649948   61616 fix.go:219] Guest: 2023-11-07 23:51:09.597960387 +0000 UTC Remote: 2023-11-07 23:51:09.503479147 +0000 UTC m=+35.011623194 (delta=94.48124ms)
	I1107 23:51:09.649972   61616 fix.go:190] guest clock delta is within tolerance: 94.48124ms
	I1107 23:51:09.649978   61616 start.go:83] releasing machines lock for "embed-certs-692502", held for 20.60778786s
	I1107 23:51:09.650007   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:09.650288   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetIP
	I1107 23:51:09.653667   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.654177   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:09.654213   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.654378   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:09.654990   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:09.655177   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:09.655260   61616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:51:09.655307   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:09.655402   61616 ssh_runner.go:195] Run: cat /version.json
	I1107 23:51:09.655430   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:09.658197   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.658414   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.658641   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:09.658693   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.658850   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:09.658853   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:09.658910   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:09.659108   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:09.659293   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:09.659295   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:09.659460   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:09.659467   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:09.659609   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:09.659756   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:09.777813   61616 ssh_runner.go:195] Run: systemctl --version
	I1107 23:51:09.783866   61616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1107 23:51:09.789372   61616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:51:09.789470   61616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:51:09.806936   61616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:51:09.806969   61616 start.go:472] detecting cgroup driver to use...
	I1107 23:51:09.807133   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:51:09.825781   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1107 23:51:09.838225   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1107 23:51:09.848397   61616 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1107 23:51:09.848478   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1107 23:51:09.859791   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:51:09.871023   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1107 23:51:09.883248   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:51:09.896806   61616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:51:09.910521   61616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1107 23:51:09.923412   61616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:51:09.934044   61616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:51:09.945971   61616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:10.086903   61616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 23:51:10.108373   61616 start.go:472] detecting cgroup driver to use...
	I1107 23:51:10.108458   61616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 23:51:10.122358   61616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:51:10.134948   61616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:51:10.151711   61616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:51:10.165118   61616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 23:51:10.183085   61616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 23:51:10.243697   61616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 23:51:10.256328   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:51:10.275608   61616 ssh_runner.go:195] Run: which cri-dockerd
	I1107 23:51:10.279579   61616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 23:51:10.288492   61616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1107 23:51:10.307985   61616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 23:51:10.411525   61616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 23:51:10.543651   61616 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1107 23:51:10.543846   61616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1107 23:51:10.565250   61616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:10.677286   61616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 23:51:12.214551   61616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.537224509s)
	I1107 23:51:12.214643   61616 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 23:51:12.388239   61616 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1107 23:51:12.550485   61616 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 23:51:12.691108   61616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:12.830335   61616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1107 23:51:12.857615   61616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:13.040681   61616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 23:51:13.141409   61616 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 23:51:13.141531   61616 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 23:51:13.148890   61616 start.go:540] Will wait 60s for crictl version
	I1107 23:51:13.148996   61616 ssh_runner.go:195] Run: which crictl
	I1107 23:51:13.154081   61616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:51:13.227177   61616 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1107 23:51:13.227286   61616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 23:51:13.260689   61616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 23:51:13.296035   61616 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1107 23:51:13.296145   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetIP
	I1107 23:51:13.299884   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:13.300438   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:13.300669   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:13.300833   61616 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1107 23:51:13.306568   61616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:51:13.325541   61616 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 23:51:13.325619   61616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:13.356306   61616 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:51:13.356335   61616 docker.go:601] Images already preloaded, skipping extraction
	I1107 23:51:13.356412   61616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:13.384783   61616 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:51:13.384831   61616 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:51:13.384924   61616 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 23:51:13.423940   61616 cni.go:84] Creating CNI manager for ""
	I1107 23:51:13.424076   61616 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 23:51:13.424106   61616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:51:13.424137   61616 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.92 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-692502 NodeName:embed-certs-692502 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:51:13.424328   61616 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-692502"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:51:13.424439   61616 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-692502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-692502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:51:13.424515   61616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:51:13.438971   61616 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:51:13.439047   61616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:51:13.451995   61616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1107 23:51:13.475656   61616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:51:13.499439   61616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1107 23:51:13.523012   61616 ssh_runner.go:195] Run: grep 192.168.72.92	control-plane.minikube.internal$ /etc/hosts
	I1107 23:51:13.528016   61616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:51:13.546760   61616 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502 for IP: 192.168.72.92
	I1107 23:51:13.546801   61616 certs.go:190] acquiring lock for shared ca certs: {Name:mkae01d77fc83079b31fa0cfd00a77c051ede193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:13.546982   61616 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9672/.minikube/ca.key
	I1107 23:51:13.547048   61616 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.key
	I1107 23:51:13.547145   61616 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/client.key
	I1107 23:51:13.547218   61616 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/apiserver.key.15f51d8c
	I1107 23:51:13.547281   61616 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/proxy-client.key
	I1107 23:51:13.547415   61616 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866.pem (1338 bytes)
	W1107 23:51:13.547459   61616 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866_empty.pem, impossibly tiny 0 bytes
	I1107 23:51:13.547471   61616 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:51:13.547511   61616 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:51:13.547546   61616 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:51:13.547585   61616 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem (1679 bytes)
	I1107 23:51:13.547652   61616 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem (1708 bytes)
	I1107 23:51:13.548485   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:51:13.585559   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:51:13.625035   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:51:13.664082   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/embed-certs-692502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:51:13.704209   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:51:13.741926   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:51:13.782151   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:51:13.824736   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:51:13.858166   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866.pem --> /usr/share/ca-certificates/16866.pem (1338 bytes)
	I1107 23:51:13.890880   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem --> /usr/share/ca-certificates/168662.pem (1708 bytes)
	I1107 23:51:13.923463   61616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:51:13.956694   61616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:51:13.979951   61616 ssh_runner.go:195] Run: openssl version
	I1107 23:51:13.987431   61616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16866.pem && ln -fs /usr/share/ca-certificates/16866.pem /etc/ssl/certs/16866.pem"
	I1107 23:51:14.001505   61616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16866.pem
	I1107 23:51:14.008101   61616 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:06 /usr/share/ca-certificates/16866.pem
	I1107 23:51:14.008197   61616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16866.pem
	I1107 23:51:14.015794   61616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16866.pem /etc/ssl/certs/51391683.0"
	I1107 23:51:14.029499   61616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168662.pem && ln -fs /usr/share/ca-certificates/168662.pem /etc/ssl/certs/168662.pem"
	I1107 23:51:14.046296   61616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168662.pem
	I1107 23:51:14.057835   61616 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:06 /usr/share/ca-certificates/168662.pem
	I1107 23:51:14.057994   61616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168662.pem
	I1107 23:51:14.065303   61616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168662.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:51:14.079092   61616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:51:14.093166   61616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:14.099837   61616 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:14.099900   61616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:14.107259   61616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:51:14.119622   61616 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:51:14.128139   61616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1107 23:51:14.137340   61616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1107 23:51:14.144806   61616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1107 23:51:14.151708   61616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1107 23:51:14.160053   61616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1107 23:51:14.167497   61616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1107 23:51:14.174801   61616 kubeadm.go:404] StartCluster: {Name:embed-certs-692502 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-692502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.92 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:51:14.174949   61616 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 23:51:14.198341   61616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:51:14.211418   61616 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1107 23:51:14.211445   61616 kubeadm.go:636] restartCluster start
	I1107 23:51:14.211496   61616 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 23:51:14.222877   61616 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:14.223781   61616 kubeconfig.go:135] verify returned: extract IP: "embed-certs-692502" does not appear in /home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:51:14.224310   61616 kubeconfig.go:146] "embed-certs-692502" context is missing from /home/jenkins/minikube-integration/17585-9672/kubeconfig - will repair!
	I1107 23:51:14.225057   61616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9672/kubeconfig: {Name:mk1460bde29620caf14dc9f78463d79ec8617f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:14.227106   61616 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 23:51:14.240112   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:14.240219   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:14.254790   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:14.254821   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:14.254877   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:14.271399   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:11.267902   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Waiting to get IP...
	I1107 23:51:11.269032   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:11.269688   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:11.269727   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:11.269623   62110 retry.go:31] will retry after 306.76074ms: waiting for machine to come up
	I1107 23:51:11.578468   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:11.579055   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:11.579088   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:11.579008   62110 retry.go:31] will retry after 285.833672ms: waiting for machine to come up
	I1107 23:51:11.866584   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:11.867167   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:11.867198   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:11.867118   62110 retry.go:31] will retry after 336.84672ms: waiting for machine to come up
	I1107 23:51:12.206029   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:12.206632   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:12.206672   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:12.206562   62110 retry.go:31] will retry after 445.729832ms: waiting for machine to come up
	I1107 23:51:12.654525   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:12.655253   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:12.655283   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:12.655197   62110 retry.go:31] will retry after 621.913208ms: waiting for machine to come up
	I1107 23:51:13.279293   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:13.279984   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:13.280011   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:13.279900   62110 retry.go:31] will retry after 875.756879ms: waiting for machine to come up
	I1107 23:51:14.157200   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:14.158038   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:14.158208   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:14.158157   62110 retry.go:31] will retry after 836.246416ms: waiting for machine to come up
	I1107 23:51:14.996784   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:14.997466   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:14.997642   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:14.997572   62110 retry.go:31] will retry after 1.087550319s: waiting for machine to come up
	I1107 23:51:11.353966   61281 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 23:51:11.353997   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:11.489650   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:12.385068   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:12.653284   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:12.755175   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:12.874021   61281 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:51:12.874097   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:12.892828   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:13.418561   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:13.918471   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:14.418111   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:14.917713   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:15.418676   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:15.440265   61281 api_server.go:72] duration metric: took 2.566236358s to wait for apiserver process to appear ...
	I1107 23:51:15.440296   61281 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:51:15.440315   61281 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I1107 23:51:11.988428   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:13.995423   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:16.487676   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:14.772282   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:14.772543   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:14.792995   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:15.271702   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:15.271810   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:15.290074   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:15.771628   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:15.771741   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:15.786719   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:16.272274   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:16.272357   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:16.287397   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:16.771961   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:16.772064   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:16.787052   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:17.272464   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:17.272559   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:17.284650   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:17.771831   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:17.771911   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:17.787949   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:18.272506   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:18.272596   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:18.287937   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:18.771488   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:18.771584   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:18.784493   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:19.272524   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:19.272611   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:19.288635   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:16.086676   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:16.087184   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:16.087206   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:16.087109   62110 retry.go:31] will retry after 1.674137229s: waiting for machine to come up
	I1107 23:51:17.763469   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:17.764035   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:17.764051   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:17.764005   62110 retry.go:31] will retry after 2.285076153s: waiting for machine to come up
	I1107 23:51:20.050510   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:20.051132   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:20.051158   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:20.051086   62110 retry.go:31] will retry after 2.42518474s: waiting for machine to come up
	I1107 23:51:20.018378   61281 api_server.go:279] https://192.168.61.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:51:20.018453   61281 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:51:20.018472   61281 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I1107 23:51:20.040289   61281 api_server.go:279] https://192.168.61.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:51:20.040319   61281 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:51:20.540786   61281 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I1107 23:51:20.558744   61281 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1107 23:51:20.558771   61281 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1107 23:51:21.040925   61281 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I1107 23:51:21.051036   61281 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1107 23:51:21.051060   61281 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1107 23:51:18.489383   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:20.489983   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:19.772264   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:19.772346   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:19.788804   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:20.272084   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:20.272176   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:20.289132   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:20.772227   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:20.772327   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:20.787403   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:21.272181   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:21.272297   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:21.288480   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:21.772141   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:21.772217   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:21.786539   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:22.271832   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:22.271959   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:22.289896   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:22.772404   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:22.772545   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:22.785009   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:23.271511   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:23.271615   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:23.287034   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:23.771597   61616 api_server.go:166] Checking apiserver status ...
	I1107 23:51:23.771690   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:23.784461   61616 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:24.240716   61616 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1107 23:51:24.240749   61616 kubeadm.go:1128] stopping kube-system containers ...
	I1107 23:51:24.240807   61616 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 23:51:24.267056   61616 docker.go:469] Stopping containers: [99ba7f17fb45 78ed2cd08d53 b4ac011b6c2b 2760f5ebb7b7 372c67a6115f 52177cad7ac0 5c22def5ff99 1e7785121f25 95cf3500a123 6f53c27de891 8595bc7ddcac fd3718b5e1f3 393e900d4ba2 c1ae40db1e4d 0debde6ae722 f6f5c4eb44b9]
	I1107 23:51:24.267158   61616 ssh_runner.go:195] Run: docker stop 99ba7f17fb45 78ed2cd08d53 b4ac011b6c2b 2760f5ebb7b7 372c67a6115f 52177cad7ac0 5c22def5ff99 1e7785121f25 95cf3500a123 6f53c27de891 8595bc7ddcac fd3718b5e1f3 393e900d4ba2 c1ae40db1e4d 0debde6ae722 f6f5c4eb44b9
	I1107 23:51:24.289005   61616 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 23:51:24.307485   61616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:51:24.318130   61616 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:51:24.318219   61616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:51:24.328361   61616 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 23:51:24.328391   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:24.463926   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:21.541069   61281 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I1107 23:51:21.548413   61281 api_server.go:279] https://192.168.61.191:8443/healthz returned 200:
	ok
	I1107 23:51:21.557069   61281 api_server.go:141] control plane version: v1.16.0
	I1107 23:51:21.557096   61281 api_server.go:131] duration metric: took 6.116792842s to wait for apiserver health ...
	I1107 23:51:21.557104   61281 cni.go:84] Creating CNI manager for ""
	I1107 23:51:21.557115   61281 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1107 23:51:21.557121   61281 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:51:21.566315   61281 system_pods.go:59] 7 kube-system pods found
	I1107 23:51:21.566349   61281 system_pods.go:61] "coredns-5644d7b6d9-bpf97" [edadb693-65cd-4556-9337-a6afbb3ac4d1] Running
	I1107 23:51:21.566358   61281 system_pods.go:61] "etcd-old-k8s-version-729146" [8e4624db-7b54-4a17-8291-d24a2e16c0f7] Running
	I1107 23:51:21.566366   61281 system_pods.go:61] "kube-apiserver-old-k8s-version-729146" [7d2ddf54-7126-4a04-908a-de45bf368c20] Running
	I1107 23:51:21.566378   61281 system_pods.go:61] "kube-controller-manager-old-k8s-version-729146" [de4e9d97-f1cd-4fd7-a593-24715a610529] Pending
	I1107 23:51:21.566390   61281 system_pods.go:61] "kube-proxy-t2qc9" [b0cd0440-9e09-4cc9-86ba-73073144929c] Running
	I1107 23:51:21.566401   61281 system_pods.go:61] "kube-scheduler-old-k8s-version-729146" [67d7ed93-087a-43ea-a363-5409cfc9afb2] Running
	I1107 23:51:21.566421   61281 system_pods.go:61] "storage-provisioner" [5bf7dc21-570e-4b93-9a0c-be49a6a60a4d] Running
	I1107 23:51:21.566435   61281 system_pods.go:74] duration metric: took 9.307047ms to wait for pod list to return data ...
	I1107 23:51:21.566448   61281 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:51:21.570226   61281 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:51:21.570262   61281 node_conditions.go:123] node cpu capacity is 2
	I1107 23:51:21.570275   61281 node_conditions.go:105] duration metric: took 3.816127ms to run NodePressure ...
	I1107 23:51:21.570296   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:21.872130   61281 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1107 23:51:21.877796   61281 kubeadm.go:787] kubelet initialised
	I1107 23:51:21.877818   61281 kubeadm.go:788] duration metric: took 5.667819ms waiting for restarted kubelet to initialise ...
	I1107 23:51:21.877827   61281 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:21.883281   61281 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:21.888886   61281 pod_ready.go:92] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:21.888906   61281 pod_ready.go:81] duration metric: took 5.598343ms waiting for pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:21.888914   61281 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:21.895695   61281 pod_ready.go:92] pod "etcd-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:21.895726   61281 pod_ready.go:81] duration metric: took 6.804646ms waiting for pod "etcd-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:21.895738   61281 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:21.900628   61281 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:21.900652   61281 pod_ready.go:81] duration metric: took 4.904575ms waiting for pod "kube-apiserver-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:21.900664   61281 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:24.373542   61281 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:24.373574   61281 pod_ready.go:81] duration metric: took 2.47290147s waiting for pod "kube-controller-manager-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:24.373588   61281 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t2qc9" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:24.386044   61281 pod_ready.go:92] pod "kube-proxy-t2qc9" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:24.386071   61281 pod_ready.go:81] duration metric: took 12.4745ms waiting for pod "kube-proxy-t2qc9" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:24.386082   61281 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:24.761033   61281 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:24.761073   61281 pod_ready.go:81] duration metric: took 374.980002ms waiting for pod "kube-scheduler-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:24.761087   61281 pod_ready.go:38] duration metric: took 2.883252631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:24.761107   61281 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:51:24.771875   61281 ops.go:34] apiserver oom_adj: -16
	I1107 23:51:24.771900   61281 kubeadm.go:640] restartCluster took 23.536207587s
	I1107 23:51:24.771909   61281 kubeadm.go:406] StartCluster complete in 23.56840189s
	I1107 23:51:24.771929   61281 settings.go:142] acquiring lock: {Name:mkb3bf85efa91260bd7f9666ea4b7d286a4ec4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:24.771999   61281 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:51:24.773502   61281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9672/kubeconfig: {Name:mk1460bde29620caf14dc9f78463d79ec8617f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:24.773751   61281 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:51:24.773773   61281 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:51:24.773847   61281 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-729146"
	I1107 23:51:24.773858   61281 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-729146"
	I1107 23:51:24.773871   61281 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-729146"
	I1107 23:51:24.773872   61281 addons.go:69] Setting dashboard=true in profile "old-k8s-version-729146"
	W1107 23:51:24.773880   61281 addons.go:240] addon storage-provisioner should already be in state true
	I1107 23:51:24.773889   61281 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-729146"
	I1107 23:51:24.773907   61281 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-729146"
	W1107 23:51:24.773915   61281 addons.go:240] addon metrics-server should already be in state true
	I1107 23:51:24.773922   61281 host.go:66] Checking if "old-k8s-version-729146" exists ...
	I1107 23:51:24.773952   61281 host.go:66] Checking if "old-k8s-version-729146" exists ...
	I1107 23:51:24.773892   61281 addons.go:231] Setting addon dashboard=true in "old-k8s-version-729146"
	W1107 23:51:24.773973   61281 addons.go:240] addon dashboard should already be in state true
	I1107 23:51:24.773995   61281 config.go:182] Loaded profile config "old-k8s-version-729146": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1107 23:51:24.774029   61281 host.go:66] Checking if "old-k8s-version-729146" exists ...
	I1107 23:51:24.773876   61281 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-729146"
	I1107 23:51:24.774378   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.774384   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.774413   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.774420   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.774424   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.774451   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.774496   61281 cache.go:107] acquiring lock: {Name:mk2e98e54594103823e5c3f2774763d418478a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:51:24.774746   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.774780   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.774772   61281 cache.go:115] /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1107 23:51:24.774960   61281 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 908.491µs
	I1107 23:51:24.774973   61281 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1107 23:51:24.774981   61281 cache.go:87] Successfully saved all images to host disk.
	I1107 23:51:24.775175   61281 config.go:182] Loaded profile config "old-k8s-version-729146": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1107 23:51:24.775504   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.775545   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.792222   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I1107 23:51:24.792722   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.792771   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I1107 23:51:24.793032   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I1107 23:51:24.793124   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.793289   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.793314   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.793510   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.793653   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.793678   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.793854   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.794072   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.794089   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.794153   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.794334   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetState
	I1107 23:51:24.794528   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.794572   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.794638   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.805820   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.805883   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.805824   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.805972   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.824571   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I1107 23:51:24.824843   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I1107 23:51:24.825108   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.825230   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.830649   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I1107 23:51:24.830684   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46431
	I1107 23:51:24.830654   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I1107 23:51:24.830768   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.830783   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.830883   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.830893   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.831333   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.831355   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.831412   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.831416   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.831936   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.831949   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.831965   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.831976   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetState
	I1107 23:51:24.831981   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.831997   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.832020   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .DriverName
	I1107 23:51:24.832329   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.832433   61281 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:24.832451   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHHostname
	I1107 23:51:24.832586   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.832597   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.833216   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.833257   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.833525   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.834150   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .DriverName
	I1107 23:51:24.834216   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetState
	I1107 23:51:24.836324   61281 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1107 23:51:24.835056   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.837393   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.837945   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a8:34", ip: ""} in network mk-old-k8s-version-729146: {Iface:virbr3 ExpiryTime:2023-11-08 00:50:40 +0000 UTC Type:0 Mac:52:54:00:5c:a8:34 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:old-k8s-version-729146 Clientid:01:52:54:00:5c:a8:34}
	I1107 23:51:24.837977   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined IP address 192.168.61.191 and MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.837880   61281 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-729146"
	W1107 23:51:24.838031   61281 addons.go:240] addon default-storageclass should already be in state true
	I1107 23:51:24.838061   61281 host.go:66] Checking if "old-k8s-version-729146" exists ...
	I1107 23:51:24.838510   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.838544   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.838752   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetState
	I1107 23:51:24.838814   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHPort
	I1107 23:51:24.840306   61281 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1107 23:51:24.839014   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHKeyPath
	I1107 23:51:24.840285   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .DriverName
	I1107 23:51:24.841711   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1107 23:51:24.841723   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1107 23:51:24.841742   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHHostname
	I1107 23:51:24.844278   61281 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:51:24.842800   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHUsername
	I1107 23:51:24.845435   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.845678   61281 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:51:24.845694   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:51:24.845724   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHHostname
	I1107 23:51:24.846077   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHPort
	I1107 23:51:24.846088   61281 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/old-k8s-version-729146/id_rsa Username:docker}
	I1107 23:51:24.846135   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a8:34", ip: ""} in network mk-old-k8s-version-729146: {Iface:virbr3 ExpiryTime:2023-11-08 00:50:40 +0000 UTC Type:0 Mac:52:54:00:5c:a8:34 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:old-k8s-version-729146 Clientid:01:52:54:00:5c:a8:34}
	I1107 23:51:24.846150   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined IP address 192.168.61.191 and MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.846318   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHKeyPath
	I1107 23:51:24.846455   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHUsername
	I1107 23:51:24.846590   61281 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/old-k8s-version-729146/id_rsa Username:docker}
	I1107 23:51:24.849329   61281 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-729146" context rescaled to 1 replicas
	I1107 23:51:24.849384   61281 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 23:51:24.851307   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.851313   61281 out.go:177] * Verifying Kubernetes components...
	I1107 23:51:24.852881   61281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:51:24.851717   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a8:34", ip: ""} in network mk-old-k8s-version-729146: {Iface:virbr3 ExpiryTime:2023-11-08 00:50:40 +0000 UTC Type:0 Mac:52:54:00:5c:a8:34 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:old-k8s-version-729146 Clientid:01:52:54:00:5c:a8:34}
	I1107 23:51:24.852960   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined IP address 192.168.61.191 and MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.851930   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHPort
	I1107 23:51:24.853152   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHKeyPath
	I1107 23:51:24.853310   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHUsername
	I1107 23:51:24.853458   61281 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/old-k8s-version-729146/id_rsa Username:docker}
	I1107 23:51:24.857723   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I1107 23:51:24.858062   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.858427   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.858445   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.858465   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
	I1107 23:51:24.858799   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.858895   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.859131   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetState
	I1107 23:51:24.859355   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.859374   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.859769   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.860341   61281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:24.860378   61281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:24.860763   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .DriverName
	I1107 23:51:24.862567   61281 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1107 23:51:22.478940   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:22.479529   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:22.479569   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:22.479468   62110 retry.go:31] will retry after 3.600667949s: waiting for machine to come up
	I1107 23:51:24.864116   61281 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:51:24.864130   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:51:24.864146   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHHostname
	I1107 23:51:24.867321   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.867717   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a8:34", ip: ""} in network mk-old-k8s-version-729146: {Iface:virbr3 ExpiryTime:2023-11-08 00:50:40 +0000 UTC Type:0 Mac:52:54:00:5c:a8:34 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:old-k8s-version-729146 Clientid:01:52:54:00:5c:a8:34}
	I1107 23:51:24.867741   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined IP address 192.168.61.191 and MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.867934   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHPort
	I1107 23:51:24.868093   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHKeyPath
	I1107 23:51:24.868236   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHUsername
	I1107 23:51:24.868333   61281 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/old-k8s-version-729146/id_rsa Username:docker}
	I1107 23:51:24.878403   61281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I1107 23:51:24.878871   61281 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:24.879314   61281 main.go:141] libmachine: Using API Version  1
	I1107 23:51:24.879342   61281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:24.879656   61281 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:24.879846   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetState
	I1107 23:51:24.881491   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .DriverName
	I1107 23:51:24.881763   61281 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:51:24.881781   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:51:24.881800   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHHostname
	I1107 23:51:24.884516   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.884914   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a8:34", ip: ""} in network mk-old-k8s-version-729146: {Iface:virbr3 ExpiryTime:2023-11-08 00:50:40 +0000 UTC Type:0 Mac:52:54:00:5c:a8:34 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:old-k8s-version-729146 Clientid:01:52:54:00:5c:a8:34}
	I1107 23:51:24.884936   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | domain old-k8s-version-729146 has defined IP address 192.168.61.191 and MAC address 52:54:00:5c:a8:34 in network mk-old-k8s-version-729146
	I1107 23:51:24.885173   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHPort
	I1107 23:51:24.885370   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHKeyPath
	I1107 23:51:24.885525   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .GetSSHUsername
	I1107 23:51:24.885673   61281 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/old-k8s-version-729146/id_rsa Username:docker}
	I1107 23:51:25.000062   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1107 23:51:25.000097   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1107 23:51:25.024043   61281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:51:25.076276   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1107 23:51:25.076308   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1107 23:51:25.079331   61281 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:51:25.079351   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1107 23:51:25.123763   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1107 23:51:25.123807   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1107 23:51:25.143247   61281 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:51:25.143280   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:51:25.144339   61281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:51:25.172935   61281 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 23:51:25.172928   61281 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-729146" to be "Ready" ...
	I1107 23:51:25.172978   61281 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1107 23:51:25.172993   61281 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:51:25.173002   61281 cache_images.go:262] succeeded pushing to: old-k8s-version-729146
	I1107 23:51:25.173014   61281 cache_images.go:263] failed pushing to: 
	I1107 23:51:25.173034   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.173047   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.173419   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.173441   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.173458   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | Closing plugin on server side
	I1107 23:51:25.173460   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.173475   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.175424   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | Closing plugin on server side
	I1107 23:51:25.175426   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.175455   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.176870   61281 node_ready.go:49] node "old-k8s-version-729146" has status "Ready":"True"
	I1107 23:51:25.176894   61281 node_ready.go:38] duration metric: took 3.929146ms waiting for node "old-k8s-version-729146" to be "Ready" ...
	I1107 23:51:25.176906   61281 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:25.190659   61281 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:25.200491   61281 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:51:25.200520   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:51:25.209013   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1107 23:51:25.209043   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1107 23:51:25.259395   61281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:51:25.259897   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1107 23:51:25.259921   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1107 23:51:25.316072   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1107 23:51:25.316096   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1107 23:51:25.429791   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1107 23:51:25.429821   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1107 23:51:25.570745   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1107 23:51:25.570784   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1107 23:51:25.624242   61281 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:51:25.624278   61281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1107 23:51:25.700239   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.700287   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.701492   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.701519   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.701529   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.701538   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.701496   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | Closing plugin on server side
	I1107 23:51:25.701801   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.701835   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.706106   61281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:51:25.772137   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.772167   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.772553   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.772575   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.772585   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.772596   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.772602   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | Closing plugin on server side
	I1107 23:51:25.772849   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.772870   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.780185   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.780205   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.780484   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.780502   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.893131   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.893158   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.893483   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.893504   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.893518   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:25.893533   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:25.895431   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | Closing plugin on server side
	I1107 23:51:25.895460   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:25.895476   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:25.895491   61281 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-729146"
	I1107 23:51:26.276278   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:26.276307   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:26.276634   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:26.276657   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:26.276668   61281 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:26.276678   61281 main.go:141] libmachine: (old-k8s-version-729146) Calling .Close
	I1107 23:51:26.276907   61281 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:26.276922   61281 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:26.276949   61281 main.go:141] libmachine: (old-k8s-version-729146) DBG | Closing plugin on server side
	I1107 23:51:26.278798   61281 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-729146 addons enable metrics-server	
	
	
	I1107 23:51:26.280601   61281 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1107 23:51:26.282122   61281 addons.go:502] enable addons completed in 1.50835824s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1107 23:51:22.992969   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:25.489715   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:25.446063   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:25.670674   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:25.779801   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:25.857599   61616 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:51:25.857704   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:25.871596   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:26.389853   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:26.889808   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:27.389488   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:27.889864   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:28.389248   61616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:28.425200   61616 api_server.go:72] duration metric: took 2.567599952s to wait for apiserver process to appear ...
	I1107 23:51:28.425226   61616 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:51:28.425241   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:28.425870   61616 api_server.go:269] stopped: https://192.168.72.92:8443/healthz: Get "https://192.168.72.92:8443/healthz": dial tcp 192.168.72.92:8443: connect: connection refused
	I1107 23:51:28.425907   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:28.426256   61616 api_server.go:269] stopped: https://192.168.72.92:8443/healthz: Get "https://192.168.72.92:8443/healthz": dial tcp 192.168.72.92:8443: connect: connection refused
	I1107 23:51:28.926496   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:26.082770   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:26.083458   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | unable to find current IP address of domain default-k8s-diff-port-385734 in network mk-default-k8s-diff-port-385734
	I1107 23:51:26.083481   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | I1107 23:51:26.083371   62110 retry.go:31] will retry after 3.163253584s: waiting for machine to come up
	I1107 23:51:29.250431   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.250918   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Found IP for machine: 192.168.39.88
	I1107 23:51:29.250946   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Reserving static IP address...
	I1107 23:51:29.250979   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has current primary IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.251429   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385734", mac: "52:54:00:35:5e:7b", ip: "192.168.39.88"} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.251465   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Reserved static IP address: 192.168.39.88
	I1107 23:51:29.251488   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | skip adding static IP to network mk-default-k8s-diff-port-385734 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385734", mac: "52:54:00:35:5e:7b", ip: "192.168.39.88"}
	I1107 23:51:29.251511   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Getting to WaitForSSH function...
	I1107 23:51:29.251528   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Waiting for SSH to be available...
	I1107 23:51:29.253280   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.253593   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.253626   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.253778   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Using SSH client type: external
	I1107 23:51:29.253817   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa (-rw-------)
	I1107 23:51:29.253877   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:51:29.253913   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | About to run SSH command:
	I1107 23:51:29.253936   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | exit 0
	I1107 23:51:29.341588   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | SSH cmd err, output: <nil>: 
	I1107 23:51:29.342041   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetConfigRaw
	I1107 23:51:29.342721   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetIP
	I1107 23:51:29.345380   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.345709   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.345738   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.345976   61980 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/config.json ...
	I1107 23:51:29.346209   61980 machine.go:88] provisioning docker machine ...
	I1107 23:51:29.346235   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:29.346435   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetMachineName
	I1107 23:51:29.346594   61980 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385734"
	I1107 23:51:29.346614   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetMachineName
	I1107 23:51:29.346773   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:29.349178   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.349575   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.349610   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.349790   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:29.349985   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:29.350183   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:29.350354   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:29.350544   61980 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:29.350954   61980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1107 23:51:29.350971   61980 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385734 && echo "default-k8s-diff-port-385734" | sudo tee /etc/hostname
	I1107 23:51:29.479514   61980 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385734
	
	I1107 23:51:29.479543   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:29.482488   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.482942   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.482978   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.483121   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:29.483384   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:29.483571   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:29.483731   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:29.483914   61980 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:29.484246   61980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1107 23:51:29.484268   61980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385734/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:51:29.607378   61980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:51:29.607406   61980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9672/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9672/.minikube}
	I1107 23:51:29.607429   61980 buildroot.go:174] setting up certificates
	I1107 23:51:29.607444   61980 provision.go:83] configureAuth start
	I1107 23:51:29.607456   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetMachineName
	I1107 23:51:29.607723   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetIP
	I1107 23:51:29.610569   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.610974   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.611004   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.611188   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:29.613825   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.614253   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.614278   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.614519   61980 provision.go:138] copyHostCerts
	I1107 23:51:29.614587   61980 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9672/.minikube/ca.pem, removing ...
	I1107 23:51:29.614599   61980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9672/.minikube/ca.pem
	I1107 23:51:29.614676   61980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9672/.minikube/ca.pem (1082 bytes)
	I1107 23:51:29.614850   61980 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9672/.minikube/cert.pem, removing ...
	I1107 23:51:29.614861   61980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9672/.minikube/cert.pem
	I1107 23:51:29.614886   61980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9672/.minikube/cert.pem (1123 bytes)
	I1107 23:51:29.614970   61980 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9672/.minikube/key.pem, removing ...
	I1107 23:51:29.614982   61980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9672/.minikube/key.pem
	I1107 23:51:29.615010   61980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9672/.minikube/key.pem (1679 bytes)
	I1107 23:51:29.615098   61980 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385734 san=[192.168.39.88 192.168.39.88 localhost 127.0.0.1 minikube default-k8s-diff-port-385734]
	I1107 23:51:29.844441   61980 provision.go:172] copyRemoteCerts
	I1107 23:51:29.844496   61980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:51:29.844517   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:29.847812   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.848197   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:29.848230   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:29.848483   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:29.848710   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:29.848871   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:29.849033   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:29.938042   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:51:29.961872   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 23:51:29.985762   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1107 23:51:30.011819   61980 provision.go:86] duration metric: configureAuth took 404.362847ms
	I1107 23:51:30.011844   61980 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:51:30.012024   61980 config.go:182] Loaded profile config "default-k8s-diff-port-385734": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:51:30.012047   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:30.012357   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:30.015015   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:30.015377   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:30.015407   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:30.015556   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:30.015746   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:30.015890   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:30.016022   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:30.016162   61980 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:30.016489   61980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1107 23:51:30.016504   61980 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 23:51:30.131129   61980 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1107 23:51:30.131153   61980 buildroot.go:70] root file system type: tmpfs
	I1107 23:51:30.131287   61980 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 23:51:30.131317   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:30.134344   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:30.134769   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:30.134808   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:30.135045   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:30.135257   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:30.135481   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:30.135628   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:30.135813   61980 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:30.136279   61980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1107 23:51:30.136385   61980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 23:51:30.262244   61980 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 23:51:30.262288   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:30.265411   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:30.265791   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:30.265822   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:30.266028   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:30.266209   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:30.266440   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:30.266596   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:30.266778   61980 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:30.267108   61980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1107 23:51:30.267125   61980 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 23:51:27.466609   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:29.468271   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:27.989866   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:30.488716   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:32.374094   61616 api_server.go:279] https://192.168.72.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:51:32.374131   61616 api_server.go:103] status: https://192.168.72.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:51:32.374149   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:32.404251   61616 api_server.go:279] https://192.168.72.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:51:32.404286   61616 api_server.go:103] status: https://192.168.72.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:51:32.426433   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:32.506851   61616 api_server.go:279] https://192.168.72.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:51:32.506886   61616 api_server.go:103] status: https://192.168.72.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:51:32.926353   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:32.931057   61616 api_server.go:279] https://192.168.72.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:51:32.931090   61616 api_server.go:103] status: https://192.168.72.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:51:33.426612   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:33.437033   61616 api_server.go:279] https://192.168.72.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:51:33.437065   61616 api_server.go:103] status: https://192.168.72.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:51:33.926612   61616 api_server.go:253] Checking apiserver healthz at https://192.168.72.92:8443/healthz ...
	I1107 23:51:33.932013   61616 api_server.go:279] https://192.168.72.92:8443/healthz returned 200:
	ok
	I1107 23:51:33.942708   61616 api_server.go:141] control plane version: v1.28.3
	I1107 23:51:33.942735   61616 api_server.go:131] duration metric: took 5.517503328s to wait for apiserver health ...
	I1107 23:51:33.942744   61616 cni.go:84] Creating CNI manager for ""
	I1107 23:51:33.942759   61616 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 23:51:33.944974   61616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1107 23:51:33.946436   61616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1107 23:51:33.974407   61616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1107 23:51:34.017265   61616 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:51:34.039009   61616 system_pods.go:59] 8 kube-system pods found
	I1107 23:51:34.039047   61616 system_pods.go:61] "coredns-5dd5756b68-zc4dl" [7fdab3f9-c7a2-4413-b03f-928707207d27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:51:34.039057   61616 system_pods.go:61] "etcd-embed-certs-692502" [c4fc907a-6738-4656-8ea5-c9f32ac5c4f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 23:51:34.039069   61616 system_pods.go:61] "kube-apiserver-embed-certs-692502" [3a1ac3fd-a8b7-43a7-8a06-d0ae7ab4dcec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 23:51:34.039077   61616 system_pods.go:61] "kube-controller-manager-embed-certs-692502" [753abdb8-e3f9-420b-b8b1-62ef4c2ec58b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 23:51:34.039089   61616 system_pods.go:61] "kube-proxy-zfjqb" [fc074099-1e75-4a8b-8f2c-d4c37c458de0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:51:34.039101   61616 system_pods.go:61] "kube-scheduler-embed-certs-692502" [fd5a6031-2b2d-4d99-8166-8ba2172f859f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 23:51:34.039110   61616 system_pods.go:61] "metrics-server-57f55c9bc5-b9wv4" [4716cc76-3c11-4937-8286-a92940f3d245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:51:34.039122   61616 system_pods.go:61] "storage-provisioner" [5f211136-bc10-45c6-b12d-0db98097b1c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:51:34.039132   61616 system_pods.go:74] duration metric: took 21.844187ms to wait for pod list to return data ...
	I1107 23:51:34.039144   61616 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:51:34.044339   61616 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:51:34.044380   61616 node_conditions.go:123] node cpu capacity is 2
	I1107 23:51:34.044392   61616 node_conditions.go:105] duration metric: took 5.242012ms to run NodePressure ...
	I1107 23:51:34.044411   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:34.522099   61616 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1107 23:51:34.532301   61616 kubeadm.go:787] kubelet initialised
	I1107 23:51:34.532322   61616 kubeadm.go:788] duration metric: took 10.198643ms waiting for restarted kubelet to initialise ...
	I1107 23:51:34.532329   61616 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:31.163590   61980 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1107 23:51:31.163628   61980 machine.go:91] provisioned docker machine in 1.817401224s
	I1107 23:51:31.163643   61980 start.go:300] post-start starting for "default-k8s-diff-port-385734" (driver="kvm2")
	I1107 23:51:31.163657   61980 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:51:31.163677   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:31.164034   61980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:51:31.164067   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:31.167411   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.167807   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:31.167837   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.168052   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:31.168269   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:31.168466   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:31.168682   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:31.258787   61980 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:51:31.263384   61980 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:51:31.263414   61980 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9672/.minikube/addons for local assets ...
	I1107 23:51:31.263487   61980 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9672/.minikube/files for local assets ...
	I1107 23:51:31.263583   61980 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem -> 168662.pem in /etc/ssl/certs
	I1107 23:51:31.263676   61980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:51:31.275729   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem --> /etc/ssl/certs/168662.pem (1708 bytes)
	I1107 23:51:31.299523   61980 start.go:303] post-start completed in 135.859004ms
	I1107 23:51:31.299556   61980 fix.go:56] fixHost completed within 21.649427889s
	I1107 23:51:31.299582   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:31.302479   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.302880   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:31.302918   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.303050   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:31.303287   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:31.303489   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:31.303644   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:31.303854   61980 main.go:141] libmachine: Using SSH client type: native
	I1107 23:51:31.304299   61980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1107 23:51:31.304314   61980 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:51:31.417959   61980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699401091.357550933
	
	I1107 23:51:31.417988   61980 fix.go:206] guest clock: 1699401091.357550933
	I1107 23:51:31.417995   61980 fix.go:219] Guest: 2023-11-07 23:51:31.357550933 +0000 UTC Remote: 2023-11-07 23:51:31.299560458 +0000 UTC m=+36.019571697 (delta=57.990475ms)
	I1107 23:51:31.418017   61980 fix.go:190] guest clock delta is within tolerance: 57.990475ms
	I1107 23:51:31.418024   61980 start.go:83] releasing machines lock for "default-k8s-diff-port-385734", held for 21.767932216s
	I1107 23:51:31.418046   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:31.418314   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetIP
	I1107 23:51:31.421666   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.422043   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:31.422078   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.422267   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:31.422870   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:31.423081   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:31.423186   61980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:51:31.423243   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:31.423372   61980 ssh_runner.go:195] Run: cat /version.json
	I1107 23:51:31.423409   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:31.426226   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.426495   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.426664   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:31.426691   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.426801   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:31.426829   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:31.426871   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:31.427026   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:31.427099   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:31.427188   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:31.427314   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:31.427393   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:31.427541   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:31.427552   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:31.510959   61980 ssh_runner.go:195] Run: systemctl --version
	I1107 23:51:31.537157   61980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1107 23:51:31.544403   61980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:51:31.544483   61980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:51:31.563020   61980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:51:31.563049   61980 start.go:472] detecting cgroup driver to use...
	I1107 23:51:31.563209   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:51:31.583870   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1107 23:51:31.595320   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1107 23:51:31.606864   61980 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1107 23:51:31.606967   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1107 23:51:31.616952   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:51:31.627709   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1107 23:51:31.638236   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:51:31.648734   61980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:51:31.659937   61980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1107 23:51:31.670966   61980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:51:31.680887   61980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:51:31.690305   61980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:31.825124   61980 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 23:51:31.845847   61980 start.go:472] detecting cgroup driver to use...
	I1107 23:51:31.845965   61980 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 23:51:31.860376   61980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:51:31.881042   61980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:51:31.901192   61980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:51:31.918486   61980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 23:51:31.931856   61980 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 23:51:31.964872   61980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 23:51:31.979971   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:51:31.998226   61980 ssh_runner.go:195] Run: which cri-dockerd
	I1107 23:51:32.002203   61980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 23:51:32.011905   61980 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1107 23:51:32.027961   61980 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 23:51:32.147873   61980 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 23:51:32.268513   61980 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1107 23:51:32.268679   61980 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1107 23:51:32.286431   61980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:32.411266   61980 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 23:51:33.936860   61980 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.525557693s)
	I1107 23:51:33.936987   61980 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 23:51:34.080382   61980 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1107 23:51:34.200355   61980 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 23:51:34.335661   61980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:34.451922   61980 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1107 23:51:34.479036   61980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:51:34.665382   61980 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 23:51:34.783922   61980 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 23:51:34.784001   61980 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 23:51:34.791962   61980 start.go:540] Will wait 60s for crictl version
	I1107 23:51:34.792122   61980 ssh_runner.go:195] Run: which crictl
	I1107 23:51:34.797570   61980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:51:34.876352   61980 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1107 23:51:34.876427   61980 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 23:51:34.908002   61980 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 23:51:34.939892   61980 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1107 23:51:34.939947   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetIP
	I1107 23:51:34.943269   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:34.943689   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:34.943724   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:34.944002   61980 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:51:34.949346   61980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:51:34.971058   61980 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 23:51:34.971129   61980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:34.994568   61980 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:51:34.994599   61980 docker.go:601] Images already preloaded, skipping extraction
	I1107 23:51:34.994678   61980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:35.020084   61980 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:51:35.020116   61980 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:51:35.020184   61980 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 23:51:35.051389   61980 cni.go:84] Creating CNI manager for ""
	I1107 23:51:35.051424   61980 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 23:51:35.051445   61980 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:51:35.051474   61980 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385734 NodeName:default-k8s-diff-port-385734 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:51:35.051661   61980 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-385734"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:51:35.051767   61980 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-385734 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-385734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1107 23:51:35.051846   61980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:51:35.064100   61980 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:51:35.064177   61980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:51:35.074208   61980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (390 bytes)
	I1107 23:51:35.092202   61980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:51:35.109740   61980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2114 bytes)
	I1107 23:51:35.126465   61980 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I1107 23:51:35.130180   61980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:51:35.142319   61980 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734 for IP: 192.168.39.88
	I1107 23:51:35.142357   61980 certs.go:190] acquiring lock for shared ca certs: {Name:mkae01d77fc83079b31fa0cfd00a77c051ede193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:35.142531   61980 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9672/.minikube/ca.key
	I1107 23:51:35.142589   61980 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.key
	I1107 23:51:35.142686   61980 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/client.key
	I1107 23:51:35.142782   61980 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/apiserver.key.e1aac5bc
	I1107 23:51:35.142851   61980 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/proxy-client.key
	I1107 23:51:35.143008   61980 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866.pem (1338 bytes)
	W1107 23:51:35.143052   61980 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866_empty.pem, impossibly tiny 0 bytes
	I1107 23:51:35.143069   61980 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:51:35.143101   61980 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:51:35.143131   61980 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:51:35.143183   61980 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/certs/home/jenkins/minikube-integration/17585-9672/.minikube/certs/key.pem (1679 bytes)
	I1107 23:51:35.143233   61980 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem (1708 bytes)
	I1107 23:51:35.143858   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:51:35.167548   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:51:35.190474   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:51:35.215910   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/default-k8s-diff-port-385734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:51:35.240035   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:51:35.264879   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:51:35.287606   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:51:35.310549   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:51:35.334674   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/ssl/certs/168662.pem --> /usr/share/ca-certificates/168662.pem (1708 bytes)
	I1107 23:51:34.545179   61616 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:34.558951   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.558975   61616 pod_ready.go:81] duration metric: took 13.770418ms waiting for pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:34.558984   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.558990   61616 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:34.568042   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "etcd-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.568072   61616 pod_ready.go:81] duration metric: took 9.067301ms waiting for pod "etcd-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:34.568090   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "etcd-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.568102   61616 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:34.591336   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.591371   61616 pod_ready.go:81] duration metric: took 23.257329ms waiting for pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:34.591383   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.591393   61616 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:34.604601   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.604632   61616 pod_ready.go:81] duration metric: took 13.226737ms waiting for pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:34.604645   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.604653   61616 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zfjqb" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:34.932892   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "kube-proxy-zfjqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.932985   61616 pod_ready.go:81] duration metric: took 328.32001ms waiting for pod "kube-proxy-zfjqb" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:34.933003   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "kube-proxy-zfjqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:34.933019   61616 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:35.327551   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:35.327588   61616 pod_ready.go:81] duration metric: took 394.560028ms waiting for pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:35.327600   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:35.327607   61616 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:35.727166   61616 pod_ready.go:97] node "embed-certs-692502" hosting pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:35.727193   61616 pod_ready.go:81] duration metric: took 399.576239ms waiting for pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:35.727202   61616 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-692502" hosting pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:35.727208   61616 pod_ready.go:38] duration metric: took 1.194871498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:35.727225   61616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:51:35.737833   61616 ops.go:34] apiserver oom_adj: -16
	I1107 23:51:35.737863   61616 kubeadm.go:640] restartCluster took 21.526411331s
	I1107 23:51:35.737875   61616 kubeadm.go:406] StartCluster complete in 21.563085167s
	I1107 23:51:35.737895   61616 settings.go:142] acquiring lock: {Name:mkb3bf85efa91260bd7f9666ea4b7d286a4ec4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:35.737996   61616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:51:35.739853   61616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9672/kubeconfig: {Name:mk1460bde29620caf14dc9f78463d79ec8617f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:35.740077   61616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:51:35.740195   61616 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:51:35.740284   61616 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-692502"
	I1107 23:51:35.740298   61616 addons.go:69] Setting dashboard=true in profile "embed-certs-692502"
	I1107 23:51:35.740309   61616 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-692502"
	I1107 23:51:35.740307   61616 addons.go:69] Setting default-storageclass=true in profile "embed-certs-692502"
	I1107 23:51:35.740318   61616 addons.go:231] Setting addon dashboard=true in "embed-certs-692502"
	I1107 23:51:35.740337   61616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-692502"
	W1107 23:51:35.740318   61616 addons.go:240] addon storage-provisioner should already be in state true
	I1107 23:51:35.740353   61616 addons.go:69] Setting metrics-server=true in profile "embed-certs-692502"
	I1107 23:51:35.740393   61616 addons.go:231] Setting addon metrics-server=true in "embed-certs-692502"
	W1107 23:51:35.740407   61616 addons.go:240] addon metrics-server should already be in state true
	W1107 23:51:35.740344   61616 addons.go:240] addon dashboard should already be in state true
	I1107 23:51:35.740548   61616 host.go:66] Checking if "embed-certs-692502" exists ...
	I1107 23:51:35.740394   61616 host.go:66] Checking if "embed-certs-692502" exists ...
	I1107 23:51:35.740287   61616 config.go:182] Loaded profile config "embed-certs-692502": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:51:35.740859   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.740890   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.740472   61616 host.go:66] Checking if "embed-certs-692502" exists ...
	I1107 23:51:35.740997   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.741003   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.741055   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.740974   61616 cache.go:107] acquiring lock: {Name:mk2e98e54594103823e5c3f2774763d418478a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:51:35.741072   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.741166   61616 cache.go:115] /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1107 23:51:35.741183   61616 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 224.445µs
	I1107 23:51:35.741192   61616 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1107 23:51:35.741200   61616 cache.go:87] Successfully saved all images to host disk.
	I1107 23:51:35.741493   61616 config.go:182] Loaded profile config "embed-certs-692502": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:51:35.741549   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.741634   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.741893   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.741927   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.757586   61616 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-692502" context rescaled to 1 replicas
	I1107 23:51:35.757634   61616 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.92 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 23:51:35.761770   61616 out.go:177] * Verifying Kubernetes components...
	I1107 23:51:35.758577   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I1107 23:51:35.759187   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I1107 23:51:35.759246   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I1107 23:51:35.759323   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I1107 23:51:35.760997   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I1107 23:51:35.763387   61616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:51:35.763809   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.763868   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.763930   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.763965   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.764007   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.764336   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.764357   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.764528   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.764547   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.764561   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.764569   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.764585   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.764587   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.764671   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.764687   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.764702   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.764953   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.764987   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.765016   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.765194   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.765275   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetState
	I1107 23:51:35.765503   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.765546   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.765571   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.765627   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.765664   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.765697   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.765857   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetState
	I1107 23:51:35.768899   61616 addons.go:231] Setting addon default-storageclass=true in "embed-certs-692502"
	W1107 23:51:35.768920   61616 addons.go:240] addon default-storageclass should already be in state true
	I1107 23:51:35.768945   61616 host.go:66] Checking if "embed-certs-692502" exists ...
	I1107 23:51:35.769250   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.769282   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.769847   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.769893   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.782564   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I1107 23:51:35.783113   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.783592   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.783609   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.784014   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.784260   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetState
	I1107 23:51:35.784704   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1107 23:51:35.785292   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.785926   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.785948   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.786289   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:35.788586   61616 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1107 23:51:35.786908   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.789732   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I1107 23:51:35.791802   61616 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1107 23:51:35.790396   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetState
	I1107 23:51:35.790669   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.793511   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1107 23:51:35.793527   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1107 23:51:35.793547   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:35.793969   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.793989   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.794468   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.795126   61616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:35.795172   61616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:35.797155   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:35.799260   61616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:51:35.798252   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.798967   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:35.800622   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:35.800650   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.800707   61616 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:51:35.800732   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:51:35.800752   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:35.801395   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:35.801594   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:35.802064   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:35.804071   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.804508   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:35.804527   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.804760   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:35.804973   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:35.805140   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:35.805269   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:35.814737   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I1107 23:51:35.815137   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.815762   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.815789   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.815847   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I1107 23:51:35.816156   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.816373   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:35.816436   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.816918   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.816932   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.817192   61616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:35.817212   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:35.817259   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.817460   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetState
	I1107 23:51:35.818268   61616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
	I1107 23:51:35.818736   61616 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:35.819220   61616 main.go:141] libmachine: Using API Version  1
	I1107 23:51:35.819236   61616 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:35.819690   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:35.821929   61616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1107 23:51:31.472773   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:33.968950   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:32.987376   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:34.990218   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:35.820368   61616 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:35.821204   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.821876   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:35.823411   61616 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:51:35.823424   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:51:35.823442   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:35.823467   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:35.823510   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:35.823543   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.823631   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:35.823690   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetState
	I1107 23:51:35.823807   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:35.825682   61616 main.go:141] libmachine: (embed-certs-692502) Calling .DriverName
	I1107 23:51:35.826009   61616 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:51:35.826025   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:51:35.826041   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHHostname
	I1107 23:51:35.827111   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.827593   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:35.827606   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.827773   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:35.827898   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:35.828012   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:35.828175   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:35.829431   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.829860   61616 main.go:141] libmachine: (embed-certs-692502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2e:94", ip: ""} in network mk-embed-certs-692502: {Iface:virbr4 ExpiryTime:2023-11-08 00:51:02 +0000 UTC Type:0 Mac:52:54:00:7f:2e:94 Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:embed-certs-692502 Clientid:01:52:54:00:7f:2e:94}
	I1107 23:51:35.829875   61616 main.go:141] libmachine: (embed-certs-692502) DBG | domain embed-certs-692502 has defined IP address 192.168.72.92 and MAC address 52:54:00:7f:2e:94 in network mk-embed-certs-692502
	I1107 23:51:35.830041   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHPort
	I1107 23:51:35.830191   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHKeyPath
	I1107 23:51:35.830285   61616 main.go:141] libmachine: (embed-certs-692502) Calling .GetSSHUsername
	I1107 23:51:35.830491   61616 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/embed-certs-692502/id_rsa Username:docker}
	I1107 23:51:35.951782   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1107 23:51:35.951815   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1107 23:51:35.988317   61616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:51:36.002751   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1107 23:51:36.002781   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1107 23:51:36.010534   61616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:51:36.028981   61616 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:51:36.029007   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1107 23:51:36.094676   61616 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 23:51:36.094703   61616 node_ready.go:35] waiting up to 6m0s for node "embed-certs-692502" to be "Ready" ...
	I1107 23:51:36.094825   61616 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:51:36.094851   61616 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:51:36.094862   61616 cache_images.go:262] succeeded pushing to: embed-certs-692502
	I1107 23:51:36.094867   61616 cache_images.go:263] failed pushing to: 
	I1107 23:51:36.094891   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:36.094911   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:36.095226   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:36.095243   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:36.095258   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:36.095278   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:36.095637   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:36.095667   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:36.099826   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1107 23:51:36.099842   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1107 23:51:36.119331   61616 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:51:36.119356   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:51:36.137830   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1107 23:51:36.137851   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1107 23:51:36.155374   61616 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:51:36.155399   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:51:36.221681   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1107 23:51:36.221705   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1107 23:51:36.290087   61616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:51:36.302875   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1107 23:51:36.302911   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1107 23:51:36.592374   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1107 23:51:36.592399   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1107 23:51:36.731846   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1107 23:51:36.731868   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1107 23:51:36.822222   61616 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:51:36.822244   61616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1107 23:51:36.910265   61616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:51:37.773111   61616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.784737346s)
	I1107 23:51:37.773214   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:37.773231   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:37.775378   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:37.775402   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:37.775418   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:37.775461   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:37.775477   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:37.777507   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:37.777559   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:37.777569   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:37.786506   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:37.786546   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:37.786924   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:37.787002   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:37.787026   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.130293   61616 node_ready.go:58] node "embed-certs-692502" has status "Ready":"False"
	I1107 23:51:38.234951   61616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.224379686s)
	I1107 23:51:38.235008   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:38.235022   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:38.237097   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:38.237109   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:38.237125   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.237136   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:38.237167   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:38.238927   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:38.238948   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.238962   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:38.393240   61616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.10310674s)
	I1107 23:51:38.393359   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:38.393380   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:38.393704   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:38.393728   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.393738   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:38.393736   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:38.393747   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:38.393991   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:38.394008   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.394018   61616 addons.go:467] Verifying addon metrics-server=true in "embed-certs-692502"
	I1107 23:51:38.630429   61616 node_ready.go:49] node "embed-certs-692502" has status "Ready":"True"
	I1107 23:51:38.630462   61616 node_ready.go:38] duration metric: took 2.535731844s waiting for node "embed-certs-692502" to be "Ready" ...
	I1107 23:51:38.630476   61616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:38.636810   61616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:38.837821   61616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.927505498s)
	I1107 23:51:38.837878   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:38.837892   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:38.838366   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:38.838378   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:38.838396   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.838414   61616 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:38.838455   61616 main.go:141] libmachine: (embed-certs-692502) Calling .Close
	I1107 23:51:38.838747   61616 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:38.838834   61616 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:38.840749   61616 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-692502 addons enable metrics-server	
	
	
	I1107 23:51:38.838811   61616 main.go:141] libmachine: (embed-certs-692502) DBG | Closing plugin on server side
	I1107 23:51:38.843754   61616 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1107 23:51:38.845123   61616 addons.go:502] enable addons completed in 3.104920829s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1107 23:51:35.358657   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:51:35.383226   61980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9672/.minikube/certs/16866.pem --> /usr/share/ca-certificates/16866.pem (1338 bytes)
	I1107 23:51:35.407127   61980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:51:35.424030   61980 ssh_runner.go:195] Run: openssl version
	I1107 23:51:35.429735   61980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168662.pem && ln -fs /usr/share/ca-certificates/168662.pem /etc/ssl/certs/168662.pem"
	I1107 23:51:35.439624   61980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168662.pem
	I1107 23:51:35.444188   61980 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:06 /usr/share/ca-certificates/168662.pem
	I1107 23:51:35.444255   61980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168662.pem
	I1107 23:51:35.449668   61980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168662.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:51:35.458733   61980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:51:35.469179   61980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:35.474888   61980 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:35.474957   61980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:51:35.480744   61980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:51:35.490607   61980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16866.pem && ln -fs /usr/share/ca-certificates/16866.pem /etc/ssl/certs/16866.pem"
	I1107 23:51:35.499617   61980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16866.pem
	I1107 23:51:35.504510   61980 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:06 /usr/share/ca-certificates/16866.pem
	I1107 23:51:35.504592   61980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16866.pem
	I1107 23:51:35.510404   61980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16866.pem /etc/ssl/certs/51391683.0"
	I1107 23:51:35.521288   61980 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:51:35.527656   61980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1107 23:51:35.535383   61980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1107 23:51:35.541389   61980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1107 23:51:35.547669   61980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1107 23:51:35.553530   61980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1107 23:51:35.559480   61980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1107 23:51:35.565659   61980 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-385734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-385734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.88 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:51:35.565821   61980 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 23:51:35.584992   61980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:51:35.594234   61980 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1107 23:51:35.594261   61980 kubeadm.go:636] restartCluster start
	I1107 23:51:35.594346   61980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 23:51:35.603557   61980 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:35.604598   61980 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-385734" does not appear in /home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:51:35.605197   61980 kubeconfig.go:146] "default-k8s-diff-port-385734" context is missing from /home/jenkins/minikube-integration/17585-9672/kubeconfig - will repair!
	I1107 23:51:35.606156   61980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9672/kubeconfig: {Name:mk1460bde29620caf14dc9f78463d79ec8617f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:35.608454   61980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 23:51:35.617046   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:35.617124   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:35.628401   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:35.628430   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:35.628483   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:35.639828   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:36.140491   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:36.140580   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:36.152461   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:36.639974   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:36.640045   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:36.657594   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:37.140096   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:37.140199   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:37.155065   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:37.640569   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:37.640649   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:37.666236   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:38.140948   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:38.141060   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:38.155969   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:38.640445   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:38.640527   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:38.655313   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:39.140895   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:39.140976   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:39.152559   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:39.640712   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:39.640786   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:39.653037   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:40.140698   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:40.140778   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:40.163026   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:36.467918   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:38.468477   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:40.471632   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:37.488503   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:39.489001   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:40.675318   61616 pod_ready.go:102] pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:42.677048   61616 pod_ready.go:102] pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:43.676735   61616 pod_ready.go:92] pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:43.676767   61616 pod_ready.go:81] duration metric: took 5.039926845s waiting for pod "coredns-5dd5756b68-zc4dl" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.676780   61616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.686729   61616 pod_ready.go:92] pod "etcd-embed-certs-692502" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:43.686757   61616 pod_ready.go:81] duration metric: took 9.96895ms waiting for pod "etcd-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.686768   61616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.693838   61616 pod_ready.go:92] pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:43.693864   61616 pod_ready.go:81] duration metric: took 7.089093ms waiting for pod "kube-apiserver-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.693874   61616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.699512   61616 pod_ready.go:92] pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:43.699531   61616 pod_ready.go:81] duration metric: took 5.65083ms waiting for pod "kube-controller-manager-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.699540   61616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zfjqb" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.726556   61616 pod_ready.go:92] pod "kube-proxy-zfjqb" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:43.726580   61616 pod_ready.go:81] duration metric: took 27.034393ms waiting for pod "kube-proxy-zfjqb" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:43.726590   61616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:44.526710   61616 pod_ready.go:92] pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:44.526739   61616 pod_ready.go:81] duration metric: took 800.141295ms waiting for pod "kube-scheduler-embed-certs-692502" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:44.526753   61616 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:40.640863   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:40.640965   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:40.652319   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:41.140961   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:41.141075   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:41.153051   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:41.640593   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:41.640680   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:41.652828   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:42.139990   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:42.140087   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:42.154770   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:42.640838   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:42.640935   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:42.652440   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:43.140925   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:43.141028   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:43.155802   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:43.640204   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:43.640286   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:43.652441   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:44.139921   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:44.139998   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:44.155888   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:44.640271   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:44.640339   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:44.654530   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:45.140776   61980 api_server.go:166] Checking apiserver status ...
	I1107 23:51:45.140859   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:51:45.153871   61980 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:51:42.473710   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:44.969587   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:41.990014   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:44.488319   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:46.489602   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:46.837380   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:49.337240   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:45.617924   61980 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1107 23:51:45.617969   61980 kubeadm.go:1128] stopping kube-system containers ...
	I1107 23:51:45.618036   61980 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 23:51:45.654193   61980 docker.go:469] Stopping containers: [05b9eba26ffd e2bd652865ba 0426410c702b aa7059b5f5f4 3ad103b1be80 a53b5600b524 ba23fd285520 37f102fdc357 a12dff81ce56 555de320e88b c2befa22c3d9 ca977f6809ae 1eed5b26e677 0b172ce9c4a0 b04b6daa9bb5]
	I1107 23:51:45.654278   61980 ssh_runner.go:195] Run: docker stop 05b9eba26ffd e2bd652865ba 0426410c702b aa7059b5f5f4 3ad103b1be80 a53b5600b524 ba23fd285520 37f102fdc357 a12dff81ce56 555de320e88b c2befa22c3d9 ca977f6809ae 1eed5b26e677 0b172ce9c4a0 b04b6daa9bb5
	I1107 23:51:45.683762   61980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 23:51:45.700297   61980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:51:45.711964   61980 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:51:45.712036   61980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:51:45.722560   61980 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 23:51:45.722584   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:45.858030   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:46.696166   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:46.953291   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:47.078249   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:47.209201   61980 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:51:47.209286   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:47.231161   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:47.751139   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:48.250782   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:48.750628   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:49.251522   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:49.750876   61980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:49.814234   61980 api_server.go:72] duration metric: took 2.605033068s to wait for apiserver process to appear ...
	I1107 23:51:49.814267   61980 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:51:49.814287   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:49.814873   61980 api_server.go:269] stopped: https://192.168.39.88:8444/healthz: Get "https://192.168.39.88:8444/healthz": dial tcp 192.168.39.88:8444: connect: connection refused
	I1107 23:51:49.814914   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:49.815432   61980 api_server.go:269] stopped: https://192.168.39.88:8444/healthz: Get "https://192.168.39.88:8444/healthz": dial tcp 192.168.39.88:8444: connect: connection refused
	I1107 23:51:50.316314   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:47.022179   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:49.469258   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:48.489884   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:50.990176   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:51.835392   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:54.333412   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:53.511276   61980 api_server.go:279] https://192.168.39.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:51:53.511310   61980 api_server.go:103] status: https://192.168.39.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:51:53.511327   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:53.568094   61980 api_server.go:279] https://192.168.39.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:51:53.568129   61980 api_server.go:103] status: https://192.168.39.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:51:53.816492   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:53.823911   61980 api_server.go:279] https://192.168.39.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:51:53.823946   61980 api_server.go:103] status: https://192.168.39.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:51:54.316258   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:54.322682   61980 api_server.go:279] https://192.168.39.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:51:54.322718   61980 api_server.go:103] status: https://192.168.39.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:51:54.815822   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:54.839937   61980 api_server.go:279] https://192.168.39.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:51:54.839965   61980 api_server.go:103] status: https://192.168.39.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:51:55.316356   61980 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8444/healthz ...
	I1107 23:51:55.324122   61980 api_server.go:279] https://192.168.39.88:8444/healthz returned 200:
	ok
	I1107 23:51:55.337405   61980 api_server.go:141] control plane version: v1.28.3
	I1107 23:51:55.337449   61980 api_server.go:131] duration metric: took 5.523172778s to wait for apiserver health ...
	I1107 23:51:55.337462   61980 cni.go:84] Creating CNI manager for ""
	I1107 23:51:55.337485   61980 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 23:51:55.339772   61980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1107 23:51:55.341564   61980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1107 23:51:51.969730   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:53.969792   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:52.997875   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:55.491501   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:55.359850   61980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1107 23:51:55.402507   61980 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:51:55.415221   61980 system_pods.go:59] 8 kube-system pods found
	I1107 23:51:55.415267   61980 system_pods.go:61] "coredns-5dd5756b68-hwlsf" [b3c726ae-4441-484e-8a2a-27ebd67d81d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:51:55.415279   61980 system_pods.go:61] "etcd-default-k8s-diff-port-385734" [43866a78-4af5-46ca-a5e3-df1c8e4bfec4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 23:51:55.415291   61980 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385734" [c843ac0b-fcbe-46b5-9dad-c4807ddfd507] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 23:51:55.415305   61980 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385734" [a9d03161-6d12-46fa-bcf4-d84df219d75d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 23:51:55.415320   61980 system_pods.go:61] "kube-proxy-wl49v" [41dd7c7c-899e-4d04-92a8-fbc2e0caafaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:51:55.415335   61980 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385734" [6b106caf-0a25-44e6-8170-c7a4cabfdf81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 23:51:55.415351   61980 system_pods.go:61] "metrics-server-57f55c9bc5-r22wl" [87ffc2a7-0ef9-4d8a-a7da-1a0e0ed9fe21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:51:55.415364   61980 system_pods.go:61] "storage-provisioner" [1d1147ae-6bc8-45cd-9d8d-2d8a42104db0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:51:55.415377   61980 system_pods.go:74] duration metric: took 12.84363ms to wait for pod list to return data ...
	I1107 23:51:55.415391   61980 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:51:55.421670   61980 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:51:55.421714   61980 node_conditions.go:123] node cpu capacity is 2
	I1107 23:51:55.421727   61980 node_conditions.go:105] duration metric: took 6.328647ms to run NodePressure ...
	I1107 23:51:55.421748   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:51:55.964338   61980 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1107 23:51:55.971534   61980 kubeadm.go:787] kubelet initialised
	I1107 23:51:55.971562   61980 kubeadm.go:788] duration metric: took 7.195139ms waiting for restarted kubelet to initialise ...
	I1107 23:51:55.971572   61980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:55.980403   61980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwlsf" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:55.990868   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "coredns-5dd5756b68-hwlsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:55.990899   61980 pod_ready.go:81] duration metric: took 10.463224ms waiting for pod "coredns-5dd5756b68-hwlsf" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:55.990912   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "coredns-5dd5756b68-hwlsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:55.990921   61980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:56.001547   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "etcd-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.001579   61980 pod_ready.go:81] duration metric: took 10.647011ms waiting for pod "etcd-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:56.001596   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "etcd-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.001604   61980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:56.012359   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "kube-apiserver-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.012386   61980 pod_ready.go:81] duration metric: took 10.771919ms waiting for pod "kube-apiserver-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:56.012401   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "kube-apiserver-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.012410   61980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:56.025556   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "kube-controller-manager-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.025585   61980 pod_ready.go:81] duration metric: took 13.165383ms waiting for pod "kube-controller-manager-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:56.025602   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "kube-controller-manager-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.025610   61980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wl49v" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:56.368560   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "kube-proxy-wl49v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.368602   61980 pod_ready.go:81] duration metric: took 342.98296ms waiting for pod "kube-proxy-wl49v" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:56.368615   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "kube-proxy-wl49v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.368623   61980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:56.768902   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "kube-scheduler-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.768933   61980 pod_ready.go:81] duration metric: took 400.297912ms waiting for pod "kube-scheduler-default-k8s-diff-port-385734" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:56.768950   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "kube-scheduler-default-k8s-diff-port-385734" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:56.768959   61980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-r22wl" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.169846   61980 pod_ready.go:97] node "default-k8s-diff-port-385734" hosting pod "metrics-server-57f55c9bc5-r22wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:57.169872   61980 pod_ready.go:81] duration metric: took 400.905426ms waiting for pod "metrics-server-57f55c9bc5-r22wl" in "kube-system" namespace to be "Ready" ...
	E1107 23:51:57.169886   61980 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385734" hosting pod "metrics-server-57f55c9bc5-r22wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:51:57.169892   61980 pod_ready.go:38] duration metric: took 1.198311487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:57.169908   61980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:51:57.185717   61980 ops.go:34] apiserver oom_adj: -16
	I1107 23:51:57.185758   61980 kubeadm.go:640] restartCluster took 21.591489615s
	I1107 23:51:57.185769   61980 kubeadm.go:406] StartCluster complete in 21.620118061s
	I1107 23:51:57.185791   61980 settings.go:142] acquiring lock: {Name:mkb3bf85efa91260bd7f9666ea4b7d286a4ec4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:57.185876   61980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:51:57.188431   61980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9672/kubeconfig: {Name:mk1460bde29620caf14dc9f78463d79ec8617f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:51:57.188760   61980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:51:57.188921   61980 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:51:57.188986   61980 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385734"
	I1107 23:51:57.189006   61980 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-385734"
	W1107 23:51:57.189014   61980 addons.go:240] addon storage-provisioner should already be in state true
	I1107 23:51:57.189034   61980 config.go:182] Loaded profile config "default-k8s-diff-port-385734": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:51:57.189053   61980 host.go:66] Checking if "default-k8s-diff-port-385734" exists ...
	I1107 23:51:57.189108   61980 cache.go:107] acquiring lock: {Name:mk2e98e54594103823e5c3f2774763d418478a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:51:57.189175   61980 cache.go:115] /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1107 23:51:57.189184   61980 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 89.778µs
	I1107 23:51:57.189195   61980 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17585-9672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1107 23:51:57.189202   61980 cache.go:87] Successfully saved all images to host disk.
	I1107 23:51:57.189305   61980 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385734"
	I1107 23:51:57.189344   61980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385734"
	I1107 23:51:57.189405   61980 config.go:182] Loaded profile config "default-k8s-diff-port-385734": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:51:57.189477   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.189498   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.189751   61980 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-385734"
	I1107 23:51:57.189765   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.189777   61980 addons.go:231] Setting addon dashboard=true in "default-k8s-diff-port-385734"
	I1107 23:51:57.189783   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1107 23:51:57.189786   61980 addons.go:240] addon dashboard should already be in state true
	I1107 23:51:57.189828   61980 host.go:66] Checking if "default-k8s-diff-port-385734" exists ...
	I1107 23:51:57.189853   61980 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385734"
	I1107 23:51:57.189900   61980 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-385734"
	W1107 23:51:57.189912   61980 addons.go:240] addon metrics-server should already be in state true
	I1107 23:51:57.189933   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.189991   61980 host.go:66] Checking if "default-k8s-diff-port-385734" exists ...
	I1107 23:51:57.189996   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.190169   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.190198   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.190396   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.190474   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.197381   61980 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-385734" context rescaled to 1 replicas
	I1107 23:51:57.197426   61980 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.88 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 23:51:57.200652   61980 out.go:177] * Verifying Kubernetes components...
	I1107 23:51:57.202175   61980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:51:57.211168   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
	I1107 23:51:57.211368   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I1107 23:51:57.212029   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.212144   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I1107 23:51:57.212153   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.212883   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.212904   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.212971   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.212974   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.212983   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.213343   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.213428   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.213876   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:57.213995   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.214056   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I1107 23:51:57.214063   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.214218   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I1107 23:51:57.214397   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.214414   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.214461   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.214771   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.214900   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.214909   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.215010   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.215409   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.215446   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.215705   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.215717   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.216179   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.216520   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:57.217741   61980 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-385734"
	W1107 23:51:57.217753   61980 addons.go:240] addon default-storageclass should already be in state true
	I1107 23:51:57.217775   61980 host.go:66] Checking if "default-k8s-diff-port-385734" exists ...
	I1107 23:51:57.218103   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.218135   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.218355   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.218517   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.218544   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.218812   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.218831   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.237617   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I1107 23:51:57.238197   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.239372   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.239396   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.239887   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.240332   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:57.242714   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:57.245081   61980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1107 23:51:57.246791   61980 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:51:57.246809   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:51:57.246833   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:57.245532   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I1107 23:51:57.247924   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.248495   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.248523   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.248995   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.249209   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:57.251367   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:57.251437   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.251719   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I1107 23:51:57.253698   61980 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1107 23:51:57.252113   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:57.253747   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.252352   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.252807   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:57.253994   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:57.254393   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.255919   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.255860   61980 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1107 23:51:57.257845   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1107 23:51:57.257864   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1107 23:51:57.256094   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:57.257885   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:57.256353   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.258081   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:57.259069   61980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:51:57.259099   61980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:51:57.261892   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.262410   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:57.262457   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.262669   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:57.262837   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:57.262980   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:57.263093   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:57.273199   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I1107 23:51:57.273814   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.274360   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.274377   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.274828   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.274938   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:57.275078   61980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 23:51:57.275102   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:57.277466   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I1107 23:51:57.277807   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.278370   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.278387   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.278759   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.279069   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:57.279071   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.279349   61980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1107 23:51:57.279534   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:57.279561   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.279744   61980 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:51:57.280178   61980 main.go:141] libmachine: Using API Version  1
	I1107 23:51:57.280196   61980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:51:57.280351   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:57.280522   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:57.280581   61980 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:51:57.280729   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:57.280789   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetState
	I1107 23:51:57.280835   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:57.280961   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:57.283068   61980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:51:57.282571   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .DriverName
	I1107 23:51:56.468801   61281 pod_ready.go:102] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:57.470394   61281 pod_ready.go:92] pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:57.470420   61281 pod_ready.go:81] duration metric: took 32.279726533s waiting for pod "coredns-5644d7b6d9-bpf97" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.470433   61281 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.477169   61281 pod_ready.go:92] pod "etcd-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:57.477200   61281 pod_ready.go:81] duration metric: took 6.757535ms waiting for pod "etcd-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.477215   61281 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.483074   61281 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:57.483104   61281 pod_ready.go:81] duration metric: took 5.879739ms waiting for pod "kube-apiserver-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.483124   61281 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.488832   61281 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:57.488861   61281 pod_ready.go:81] duration metric: took 5.727856ms waiting for pod "kube-controller-manager-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.488875   61281 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t2qc9" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.495716   61281 pod_ready.go:92] pod "kube-proxy-t2qc9" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:57.495741   61281 pod_ready.go:81] duration metric: took 6.858186ms waiting for pod "kube-proxy-t2qc9" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.495753   61281 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.867699   61281 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-729146" in "kube-system" namespace has status "Ready":"True"
	I1107 23:51:57.867729   61281 pod_ready.go:81] duration metric: took 371.96836ms waiting for pod "kube-scheduler-old-k8s-version-729146" in "kube-system" namespace to be "Ready" ...
	I1107 23:51:57.867744   61281 pod_ready.go:38] duration metric: took 32.690825398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:51:57.867766   61281 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:51:57.867824   61281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:51:57.889522   61281 api_server.go:72] duration metric: took 33.040094231s to wait for apiserver process to appear ...
	I1107 23:51:57.889551   61281 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:51:57.889573   61281 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I1107 23:51:57.897845   61281 api_server.go:279] https://192.168.61.191:8443/healthz returned 200:
	ok
	I1107 23:51:57.898963   61281 api_server.go:141] control plane version: v1.16.0
	I1107 23:51:57.898988   61281 api_server.go:131] duration metric: took 9.43029ms to wait for apiserver health ...
	I1107 23:51:57.898998   61281 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:51:58.069067   61281 system_pods.go:59] 8 kube-system pods found
	I1107 23:51:58.069100   61281 system_pods.go:61] "coredns-5644d7b6d9-bpf97" [edadb693-65cd-4556-9337-a6afbb3ac4d1] Running
	I1107 23:51:58.069108   61281 system_pods.go:61] "etcd-old-k8s-version-729146" [8e4624db-7b54-4a17-8291-d24a2e16c0f7] Running
	I1107 23:51:58.069115   61281 system_pods.go:61] "kube-apiserver-old-k8s-version-729146" [7d2ddf54-7126-4a04-908a-de45bf368c20] Running
	I1107 23:51:58.069123   61281 system_pods.go:61] "kube-controller-manager-old-k8s-version-729146" [de4e9d97-f1cd-4fd7-a593-24715a610529] Running
	I1107 23:51:58.069129   61281 system_pods.go:61] "kube-proxy-t2qc9" [b0cd0440-9e09-4cc9-86ba-73073144929c] Running
	I1107 23:51:58.069136   61281 system_pods.go:61] "kube-scheduler-old-k8s-version-729146" [67d7ed93-087a-43ea-a363-5409cfc9afb2] Running
	I1107 23:51:58.069146   61281 system_pods.go:61] "metrics-server-74d5856cc6-dngxl" [945fb950-2668-4ed1-a694-8bcd76250fde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:51:58.069155   61281 system_pods.go:61] "storage-provisioner" [5bf7dc21-570e-4b93-9a0c-be49a6a60a4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:51:58.069165   61281 system_pods.go:74] duration metric: took 170.160074ms to wait for pod list to return data ...
	I1107 23:51:58.069178   61281 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:51:58.266258   61281 default_sa.go:45] found service account: "default"
	I1107 23:51:58.266294   61281 default_sa.go:55] duration metric: took 197.107277ms for default service account to be created ...
	I1107 23:51:58.266306   61281 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:51:58.469859   61281 system_pods.go:86] 8 kube-system pods found
	I1107 23:51:58.469899   61281 system_pods.go:89] "coredns-5644d7b6d9-bpf97" [edadb693-65cd-4556-9337-a6afbb3ac4d1] Running
	I1107 23:51:58.469907   61281 system_pods.go:89] "etcd-old-k8s-version-729146" [8e4624db-7b54-4a17-8291-d24a2e16c0f7] Running
	I1107 23:51:58.469915   61281 system_pods.go:89] "kube-apiserver-old-k8s-version-729146" [7d2ddf54-7126-4a04-908a-de45bf368c20] Running
	I1107 23:51:58.469924   61281 system_pods.go:89] "kube-controller-manager-old-k8s-version-729146" [de4e9d97-f1cd-4fd7-a593-24715a610529] Running
	I1107 23:51:58.469931   61281 system_pods.go:89] "kube-proxy-t2qc9" [b0cd0440-9e09-4cc9-86ba-73073144929c] Running
	I1107 23:51:58.469939   61281 system_pods.go:89] "kube-scheduler-old-k8s-version-729146" [67d7ed93-087a-43ea-a363-5409cfc9afb2] Running
	I1107 23:51:58.469963   61281 system_pods.go:89] "metrics-server-74d5856cc6-dngxl" [945fb950-2668-4ed1-a694-8bcd76250fde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:51:58.469984   61281 system_pods.go:89] "storage-provisioner" [5bf7dc21-570e-4b93-9a0c-be49a6a60a4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:51:58.469998   61281 system_pods.go:126] duration metric: took 203.683602ms to wait for k8s-apps to be running ...
	I1107 23:51:58.470014   61281 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:51:58.470075   61281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:51:58.490650   61281 system_svc.go:56] duration metric: took 20.626133ms WaitForService to wait for kubelet.
	I1107 23:51:58.490679   61281 kubeadm.go:581] duration metric: took 33.641260424s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:51:58.490704   61281 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:51:58.665804   61281 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:51:58.665842   61281 node_conditions.go:123] node cpu capacity is 2
	I1107 23:51:58.665857   61281 node_conditions.go:105] duration metric: took 175.146938ms to run NodePressure ...
	I1107 23:51:58.665873   61281 start.go:228] waiting for startup goroutines ...
	I1107 23:51:58.665883   61281 start.go:233] waiting for cluster config update ...
	I1107 23:51:58.665895   61281 start.go:242] writing updated cluster config ...
	I1107 23:51:58.666266   61281 ssh_runner.go:195] Run: rm -f paused
	I1107 23:51:58.728894   61281 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1107 23:51:58.731078   61281 out.go:177] 
	W1107 23:51:58.732765   61281 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1107 23:51:58.734359   61281 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1107 23:51:58.735884   61281 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-729146" cluster and "default" namespace by default
	I1107 23:51:56.335260   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:58.854308   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:57.284733   61980 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:51:57.284749   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:51:57.284763   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:57.288813   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.289455   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:57.289499   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.289618   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:57.289832   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:57.289999   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:57.290167   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:57.293667   61980 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:51:57.293685   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:51:57.293702   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHHostname
	I1107 23:51:57.297091   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.297402   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:5e:7b", ip: ""} in network mk-default-k8s-diff-port-385734: {Iface:virbr1 ExpiryTime:2023-11-08 00:51:23 +0000 UTC Type:0 Mac:52:54:00:35:5e:7b Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:default-k8s-diff-port-385734 Clientid:01:52:54:00:35:5e:7b}
	I1107 23:51:57.297444   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | domain default-k8s-diff-port-385734 has defined IP address 192.168.39.88 and MAC address 52:54:00:35:5e:7b in network mk-default-k8s-diff-port-385734
	I1107 23:51:57.297666   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHPort
	I1107 23:51:57.297856   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHKeyPath
	I1107 23:51:57.298086   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .GetSSHUsername
	I1107 23:51:57.298202   61980 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/default-k8s-diff-port-385734/id_rsa Username:docker}
	I1107 23:51:57.452181   61980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:51:57.455345   61980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:51:57.485434   61980 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:51:57.485456   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1107 23:51:57.515293   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1107 23:51:57.515318   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1107 23:51:57.557009   61980 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:51:57.557043   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:51:57.578138   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1107 23:51:57.578166   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1107 23:51:57.640941   61980 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:51:57.640967   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:51:57.699556   61980 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385734" to be "Ready" ...
	I1107 23:51:57.699655   61980 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 23:51:57.699688   61980 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:51:57.699568   61980 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 23:51:57.699717   61980 cache_images.go:262] succeeded pushing to: default-k8s-diff-port-385734
	I1107 23:51:57.699725   61980 cache_images.go:263] failed pushing to: 
	I1107 23:51:57.699749   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:57.699764   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:57.700168   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:57.700187   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:57.700198   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:57.700202   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:51:57.700208   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:57.700490   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:57.700549   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:57.700518   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:51:57.731855   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1107 23:51:57.731882   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1107 23:51:57.759160   61980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:51:57.848079   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1107 23:51:57.848117   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1107 23:51:58.036314   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1107 23:51:58.036362   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1107 23:51:58.140883   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1107 23:51:58.140927   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1107 23:51:58.226525   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1107 23:51:58.226558   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1107 23:51:58.248354   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1107 23:51:58.248380   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1107 23:51:58.271994   61980 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:51:58.272019   61980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1107 23:51:58.294203   61980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 23:51:59.452201   61980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.99679172s)
	I1107 23:51:59.452255   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.452268   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.452275   61980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.00005563s)
	I1107 23:51:59.452314   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.452328   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.452559   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:51:59.452603   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.452622   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.452635   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.452645   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.452667   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.452679   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.452689   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.452699   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.454614   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:51:59.454660   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.454668   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.454667   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:51:59.454684   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.454689   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.463228   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.463253   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.463487   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.463510   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.589031   61980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.829824594s)
	I1107 23:51:59.589086   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.589102   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.589432   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:51:59.589462   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.589478   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.589495   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:51:59.589507   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:51:59.589762   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:51:59.589783   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:51:59.589794   61980 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-385734"
	I1107 23:51:59.712695   61980 node_ready.go:58] node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:52:00.095236   61980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.8009715s)
	I1107 23:52:00.095296   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:52:00.095324   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:52:00.095675   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:52:00.095736   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:52:00.095750   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:52:00.095767   61980 main.go:141] libmachine: Making call to close driver server
	I1107 23:52:00.095781   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) Calling .Close
	I1107 23:52:00.097182   61980 main.go:141] libmachine: (default-k8s-diff-port-385734) DBG | Closing plugin on server side
	I1107 23:52:00.097211   61980 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:52:00.097221   61980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:52:00.099293   61980 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-385734 addons enable metrics-server	
	
	
	I1107 23:52:00.101025   61980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1107 23:52:00.102484   61980 addons.go:502] enable addons completed in 2.913561444s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1107 23:51:57.494836   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:51:59.988577   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:52:01.334083   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:52:03.334324   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:52:01.713526   61980 node_ready.go:58] node "default-k8s-diff-port-385734" has status "Ready":"False"
	I1107 23:52:04.211808   61980 node_ready.go:49] node "default-k8s-diff-port-385734" has status "Ready":"True"
	I1107 23:52:04.211837   61980 node_ready.go:38] duration metric: took 6.512245117s waiting for node "default-k8s-diff-port-385734" to be "Ready" ...
	I1107 23:52:04.211848   61980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:52:04.219344   61980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hwlsf" in "kube-system" namespace to be "Ready" ...
	I1107 23:52:02.489380   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:52:04.992767   61089 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kk6q9" in "kube-system" namespace has status "Ready":"False"
	I1107 23:52:05.834703   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	I1107 23:52:08.335263   61616 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b9wv4" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-11-07 23:50:40 UTC, ends at Tue 2023-11-07 23:52:10 UTC. --
	Nov 07 23:51:52 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:52.582747108Z" level=warning msg="cleaning up after shim disconnected" id=eb58e2586cfb201784dcd1a8ff44c68bda7661da8a26f0a07f287a9fd7a0b369 namespace=moby
	Nov 07 23:51:52 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:52.582872042Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1079]: time="2023-11-07T23:51:53.574533881Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1079]: time="2023-11-07T23:51:53.574776136Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1079]: time="2023-11-07T23:51:53.580234819Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:53.706126471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:53.706260919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:53.706278009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 07 23:51:53 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:53.706287719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 07 23:51:54 old-k8s-version-729146 dockerd[1079]: time="2023-11-07T23:51:54.142790195Z" level=info msg="ignoring event" container=1bc6add12aaf75693610e3c685cef29edbd85a26ce8c27e8dd963e7fa6065bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 23:51:54 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:54.144538391Z" level=info msg="shim disconnected" id=1bc6add12aaf75693610e3c685cef29edbd85a26ce8c27e8dd963e7fa6065bff namespace=moby
	Nov 07 23:51:54 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:54.144627791Z" level=warning msg="cleaning up after shim disconnected" id=1bc6add12aaf75693610e3c685cef29edbd85a26ce8c27e8dd963e7fa6065bff namespace=moby
	Nov 07 23:51:54 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:54.144645956Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.292419026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.292574272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.292603277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.292617585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1079]: time="2023-11-07T23:51:55.759556044Z" level=info msg="ignoring event" container=3647521315d7cafeb2b49e09ac2a7489a190519893ca39e3180ec253d68a43a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.760460926Z" level=info msg="shim disconnected" id=3647521315d7cafeb2b49e09ac2a7489a190519893ca39e3180ec253d68a43a0 namespace=moby
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.760523743Z" level=warning msg="cleaning up after shim disconnected" id=3647521315d7cafeb2b49e09ac2a7489a190519893ca39e3180ec253d68a43a0 namespace=moby
	Nov 07 23:51:55 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:51:55.760539139Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 07 23:52:08 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:52:08.203124567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 07 23:52:08 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:52:08.203793546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 07 23:52:08 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:52:08.203933776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 07 23:52:08 old-k8s-version-729146 dockerd[1085]: time="2023-11-07T23:52:08.204066280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS                            PORTS     NAMES
	647cac5e521d   6e38f40d628d                  "/storage-provisioner"   2 seconds ago    Up 2 seconds                                k8s_storage-provisioner_storage-provisioner_kube-system_5bf7dc21-570e-4b93-9a0c-be49a6a60a4d_2
	3647521315d7   a90209bb39e3                  "nginx -g 'daemon of…"   15 seconds ago   Exited (1) 14 seconds ago                   k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard_4833d19c-bb22-4e97-86ee-a047eaa00097_1
	eb88789cf599   kubernetesui/dashboard        "/dashboard --insecu…"   24 seconds ago   Up 23 seconds                               k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-f58bn_kubernetes-dashboard_189030c6-9969-444e-bc27-4e112455c491_0
	4a05510e20a6   k8s.gcr.io/pause:3.1          "/pause"                 33 seconds ago   Up 32 seconds                               k8s_POD_kubernetes-dashboard-84b68f675b-f58bn_kubernetes-dashboard_189030c6-9969-444e-bc27-4e112455c491_0
	928586e7186f   k8s.gcr.io/pause:3.1          "/pause"                 33 seconds ago   Up 31 seconds                               k8s_POD_dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard_4833d19c-bb22-4e97-86ee-a047eaa00097_0
	c236b5dfb821   k8s.gcr.io/pause:3.1          "/pause"                 33 seconds ago   Up 32 seconds                               k8s_POD_metrics-server-74d5856cc6-dngxl_kube-system_945fb950-2668-4ed1-a694-8bcd76250fde_0
	66d8ff54733a   56cc512116c8                  "sleep 3600"             47 seconds ago   Up 47 seconds                               k8s_busybox_busybox_default_eb93a790-81ed-4b23-9d67-a30a387496f2_1
	e9debebfef1f   c21b0c7400f9                  "/usr/local/bin/kube…"   48 seconds ago   Up 47 seconds                               k8s_kube-proxy_kube-proxy-t2qc9_kube-system_b0cd0440-9e09-4cc9-86ba-73073144929c_1
	97907bb1e549   bf261d157914                  "/coredns -conf /etc…"   48 seconds ago   Up 47 seconds                               k8s_coredns_coredns-5644d7b6d9-bpf97_kube-system_edadb693-65cd-4556-9337-a6afbb3ac4d1_1
	0276cf6c543f   k8s.gcr.io/pause:3.1          "/pause"                 48 seconds ago   Up 47 seconds                               k8s_POD_kube-proxy-t2qc9_kube-system_b0cd0440-9e09-4cc9-86ba-73073144929c_1
	2bd3499c9c5a   k8s.gcr.io/pause:3.1          "/pause"                 48 seconds ago   Up 47 seconds                               k8s_POD_busybox_default_eb93a790-81ed-4b23-9d67-a30a387496f2_1
	eb58e2586cfb   6e38f40d628d                  "/storage-provisioner"   48 seconds ago   Exited (1) 18 seconds ago                   k8s_storage-provisioner_storage-provisioner_kube-system_5bf7dc21-570e-4b93-9a0c-be49a6a60a4d_1
	370f8c977cbd   k8s.gcr.io/pause:3.1          "/pause"                 48 seconds ago   Up 47 seconds                               k8s_POD_coredns-5644d7b6d9-bpf97_kube-system_edadb693-65cd-4556-9337-a6afbb3ac4d1_1
	380e49fbce7b   k8s.gcr.io/pause:3.1          "/pause"                 49 seconds ago   Up 48 seconds                               k8s_POD_storage-provisioner_kube-system_5bf7dc21-570e-4b93-9a0c-be49a6a60a4d_1
	f0d7637b959e   06a629a7e51c                  "kube-controller-man…"   56 seconds ago   Up 55 seconds                               k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-729146_kube-system_b39706a67360d65bfa3cf2560791efe9_0
	431c1f869b4b   301ddc62b80b                  "kube-scheduler --au…"   56 seconds ago   Up 55 seconds                               k8s_kube-scheduler_kube-scheduler-old-k8s-version-729146_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_1
	543654d61516   b305571ca60a                  "kube-apiserver --ad…"   56 seconds ago   Up 55 seconds                               k8s_kube-apiserver_kube-apiserver-old-k8s-version-729146_kube-system_51c74bc362e8fabd1374d80e31b15eca_1
	bf41b8022469   b2756210eeab                  "etcd --advertise-cl…"   56 seconds ago   Up 55 seconds                               k8s_etcd_etcd-old-k8s-version-729146_kube-system_007fb62bd1e192e981a0ff9cfbd941d1_1
	5a85a3b058c1   k8s.gcr.io/pause:3.1          "/pause"                 57 seconds ago   Up 56 seconds                               k8s_POD_kube-scheduler-old-k8s-version-729146_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_1
	169ebbfbeee7   k8s.gcr.io/pause:3.1          "/pause"                 57 seconds ago   Up 56 seconds                               k8s_POD_kube-controller-manager-old-k8s-version-729146_kube-system_b39706a67360d65bfa3cf2560791efe9_0
	016735a6a4e1   k8s.gcr.io/pause:3.1          "/pause"                 57 seconds ago   Up 56 seconds                               k8s_POD_kube-apiserver-old-k8s-version-729146_kube-system_51c74bc362e8fabd1374d80e31b15eca_1
	666e6ce7dab7   k8s.gcr.io/pause:3.1          "/pause"                 57 seconds ago   Up 56 seconds                               k8s_POD_etcd-old-k8s-version-729146_kube-system_007fb62bd1e192e981a0ff9cfbd941d1_1
	a627d17fcaf8   gcr.io/k8s-minikube/busybox   "sleep 3600"             2 minutes ago    Exited (137) About a minute ago             k8s_busybox_busybox_default_eb93a790-81ed-4b23-9d67-a30a387496f2_0
	4ba1149d4f8d   k8s.gcr.io/pause:3.1          "/pause"                 2 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_busybox_default_eb93a790-81ed-4b23-9d67-a30a387496f2_0
	3336323861f2   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_storage-provisioner_kube-system_5bf7dc21-570e-4b93-9a0c-be49a6a60a4d_0
	e0850594ac17   bf261d157914                  "/coredns -conf /etc…"   3 minutes ago    Exited (0) 2 minutes ago                    k8s_coredns_coredns-5644d7b6d9-bpf97_kube-system_edadb693-65cd-4556-9337-a6afbb3ac4d1_0
	dd434c70f1a4   c21b0c7400f9                  "/usr/local/bin/kube…"   3 minutes ago    Exited (2) 2 minutes ago                    k8s_kube-proxy_kube-proxy-t2qc9_kube-system_b0cd0440-9e09-4cc9-86ba-73073144929c_0
	14ae9921b78f   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_coredns-5644d7b6d9-bpf97_kube-system_edadb693-65cd-4556-9337-a6afbb3ac4d1_0
	daca8e16f339   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-proxy-t2qc9_kube-system_b0cd0440-9e09-4cc9-86ba-73073144929c_0
	70c344c74bd2   301ddc62b80b                  "kube-scheduler --au…"   3 minutes ago    Exited (2) 2 minutes ago                    k8s_kube-scheduler_kube-scheduler-old-k8s-version-729146_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	381c9febe570   06a629a7e51c                  "kube-controller-man…"   3 minutes ago    Exited (2) 2 minutes ago                    k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-729146_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	5add8a846fd7   b2756210eeab                  "etcd --advertise-cl…"   3 minutes ago    Exited (0) 2 minutes ago                    k8s_etcd_etcd-old-k8s-version-729146_kube-system_007fb62bd1e192e981a0ff9cfbd941d1_0
	ddd4cce1319b   b305571ca60a                  "kube-apiserver --ad…"   3 minutes ago    Exited (0) 2 minutes ago                    k8s_kube-apiserver_kube-apiserver-old-k8s-version-729146_kube-system_51c74bc362e8fabd1374d80e31b15eca_0
	e5d102a1a6a0   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-scheduler-old-k8s-version-729146_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	6ea3e3345d15   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-controller-manager-old-k8s-version-729146_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	8f3433f73cab   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-apiserver-old-k8s-version-729146_kube-system_51c74bc362e8fabd1374d80e31b15eca_0
	732d9b32516d   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_etcd-old-k8s-version-729146_kube-system_007fb62bd1e192e981a0ff9cfbd941d1_0
	time="2023-11-07T23:52:10Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [97907bb1e549] <==
	* 2023-11-07T23:51:28.039Z [INFO] CoreDNS-1.6.2
	2023-11-07T23:51:28.040Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-11-07T23:51:29.060Z [INFO] 127.0.0.1:40483 - 11983 "HINFO IN 1836024289148738724.2530641481911967456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020363348s
	2023-11-07T23:51:37.385Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-11-07T23:51:47.386Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I1107 23:51:53.040064       1 trace.go:82] Trace[577636193]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-07 23:51:23.039017754 +0000 UTC m=+0.025449014) (total time: 30.001021671s):
	Trace[577636193]: [30.001021671s] [30.001021671s] END
	E1107 23:51:53.040108       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040108       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040108       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1107 23:51:53.040063       1 trace.go:82] Trace[1593684171]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-07 23:51:23.039614751 +0000 UTC m=+0.026046026) (total time: 30.000410704s):
	Trace[1593684171]: [30.000410704s] [30.000410704s] END
	E1107 23:51:53.040129       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040129       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040129       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1107 23:51:53.040247       1 trace.go:82] Trace[644956139]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-07 23:51:23.039420714 +0000 UTC m=+0.025852033) (total time: 30.000813184s):
	Trace[644956139]: [30.000813184s] [30.000813184s] END
	E1107 23:51:53.040254       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040254       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040254       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040108       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040129       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:51:53.040254       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [e0850594ac17] <==
	* E1107 23:49:14.896592       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896593       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896678       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	linux/amd64, go1.12.8, 795a3eb
	2023-11-07T23:48:56.426Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-11-07T23:49:06.426Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I1107 23:49:14.895872       1 trace.go:82] Trace[843189285]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-07 23:48:44.894142302 +0000 UTC m=+0.021776386) (total time: 30.001669711s):
	Trace[843189285]: [30.001669711s] [30.001669711s] END
	I1107 23:49:14.896551       1 trace.go:82] Trace[1899900081]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-07 23:48:44.894974713 +0000 UTC m=+0.022608794) (total time: 30.00155642s):
	Trace[1899900081]: [30.00155642s] [30.00155642s] END
	E1107 23:49:14.896592       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896592       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896592       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896593       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896593       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896593       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1107 23:49:14.896321       1 trace.go:82] Trace[77211519]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-07 23:48:44.89546238 +0000 UTC m=+0.023096470) (total time: 30.000824749s):
	Trace[77211519]: [30.000824749s] [30.000824749s] END
	E1107 23:49:14.896678       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896678       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1107 23:49:14.896678       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2023-11-07T23:49:23.782Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-729146
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-729146
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=old-k8s-version-729146
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_48_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:48:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:51:20 +0000   Tue, 07 Nov 2023 23:48:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:51:20 +0000   Tue, 07 Nov 2023 23:48:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:51:20 +0000   Tue, 07 Nov 2023 23:48:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:51:20 +0000   Tue, 07 Nov 2023 23:48:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.191
	  Hostname:    old-k8s-version-729146
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 213970931b4343dcb424d0252f257c33
	 System UUID:                21397093-1b43-43dc-b424-d0252f257c33
	 Boot ID:                    c474b1bf-e0ab-4cfd-b68e-42ca6f3b368d
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.7
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (11 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                coredns-5644d7b6d9-bpf97                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m27s
	  kube-system                etcd-old-k8s-version-729146                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                kube-apiserver-old-k8s-version-729146             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                kube-controller-manager-old-k8s-version-729146    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                kube-proxy-t2qc9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                kube-scheduler-old-k8s-version-729146             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                metrics-server-74d5856cc6-dngxl                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         33s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-rzpj7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-f58bn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m55s)  kubelet, old-k8s-version-729146     Node old-k8s-version-729146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x7 over 3m55s)  kubelet, old-k8s-version-729146     Node old-k8s-version-729146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x8 over 3m55s)  kubelet, old-k8s-version-729146     Node old-k8s-version-729146 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kube-proxy, old-k8s-version-729146  Starting kube-proxy.
	  Normal  Starting                 57s                    kubelet, old-k8s-version-729146     Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)      kubelet, old-k8s-version-729146     Node old-k8s-version-729146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x7 over 57s)      kubelet, old-k8s-version-729146     Node old-k8s-version-729146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)      kubelet, old-k8s-version-729146     Node old-k8s-version-729146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                    kubelet, old-k8s-version-729146     Updated Node Allocatable limit across pods
	  Normal  Starting                 47s                    kube-proxy, old-k8s-version-729146  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov 7 23:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064716] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.508185] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.833128] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140417] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.397569] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.947468] systemd-fstab-generator[510]: Ignoring "noauto" for root device
	[  +0.122500] systemd-fstab-generator[521]: Ignoring "noauto" for root device
	[  +1.232067] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.359709] systemd-fstab-generator[826]: Ignoring "noauto" for root device
	[  +0.134618] systemd-fstab-generator[837]: Ignoring "noauto" for root device
	[  +0.138877] systemd-fstab-generator[850]: Ignoring "noauto" for root device
	[  +6.469644] systemd-fstab-generator[1070]: Ignoring "noauto" for root device
	[  +3.122906] kauditd_printk_skb: 67 callbacks suppressed
	[Nov 7 23:51] systemd-fstab-generator[1485]: Ignoring "noauto" for root device
	[  +0.567311] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.232078] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.020868] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [5add8a846fd7] <==
	* 2023-11-07 23:48:19.736851 I | raft: 9f3c15d83e33cf49 is starting a new election at term 1
	2023-11-07 23:48:19.736914 I | raft: 9f3c15d83e33cf49 became candidate at term 2
	2023-11-07 23:48:19.736932 I | raft: 9f3c15d83e33cf49 received MsgVoteResp from 9f3c15d83e33cf49 at term 2
	2023-11-07 23:48:19.736944 I | raft: 9f3c15d83e33cf49 became leader at term 2
	2023-11-07 23:48:19.736952 I | raft: raft.node: 9f3c15d83e33cf49 elected leader 9f3c15d83e33cf49 at term 2
	2023-11-07 23:48:19.737442 I | etcdserver: published {Name:old-k8s-version-729146 ClientURLs:[https://192.168.61.191:2379]} to cluster fb9e9282e2104331
	2023-11-07 23:48:19.737455 I | embed: ready to serve client requests
	2023-11-07 23:48:19.738446 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-07 23:48:19.739219 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-07 23:48:19.739755 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-07 23:48:19.740131 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-07 23:48:19.745744 I | embed: ready to serve client requests
	2023-11-07 23:48:19.750803 I | embed: serving client requests on 192.168.61.191:2379
	2023-11-07 23:48:27.724797 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" " with result "range_response_count:1 size:218" took too long (219.648849ms) to execute
	2023-11-07 23:48:27.725611 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-729146.17957c1c6cad24b4\" " with result "range_response_count:1 size:455" took too long (233.03737ms) to execute
	2023-11-07 23:48:43.084360 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (233.093538ms) to execute
	2023-11-07 23:48:43.084938 W | etcdserver: request "header:<ID:14936623210804250423 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/default-token-86f9c\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/default-token-86f9c\" value_size:2353 >> failure:<>>" with result "size:16" took too long (139.810853ms) to execute
	2023-11-07 23:48:45.495803 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-bgqhf\" " with result "range_response_count:1 size:1700" took too long (153.682621ms) to execute
	2023-11-07 23:48:45.496401 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" " with result "range_response_count:1 size:636" took too long (260.313548ms) to execute
	2023-11-07 23:49:26.656486 W | etcdserver: request "header:<ID:14936623210804250829 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.191\" mod_revision:414 > success:<request_put:<key:\"/registry/masterleases/192.168.61.191\" value_size:69 lease:5713251173949475019 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.191\" > >>" with result "size:16" took too long (134.549149ms) to execute
	2023-11-07 23:49:27.043742 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (238.867617ms) to execute
	2023-11-07 23:49:27.344163 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (182.413559ms) to execute
	2023-11-07 23:49:27.344555 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "range_response_count:0 size:7" took too long (159.102358ms) to execute
	2023-11-07 23:50:03.568990 N | pkg/osutil: received terminated signal, shutting down...
	2023-11-07 23:50:03.581845 I | etcdserver: skipped leadership transfer for single member cluster
	
	* 
	* ==> etcd [bf41b8022469] <==
	* 2023-11-07 23:51:14.879053 I | etcdserver: election = 1000ms
	2023-11-07 23:51:14.879056 I | etcdserver: snapshot count = 10000
	2023-11-07 23:51:14.879064 I | etcdserver: advertise client URLs = https://192.168.61.191:2379
	2023-11-07 23:51:14.885808 I | etcdserver: restarting member 9f3c15d83e33cf49 in cluster fb9e9282e2104331 at commit index 530
	2023-11-07 23:51:14.885864 I | raft: 9f3c15d83e33cf49 became follower at term 2
	2023-11-07 23:51:14.885871 I | raft: newRaft 9f3c15d83e33cf49 [peers: [], term: 2, commit: 530, applied: 0, lastindex: 530, lastterm: 2]
	2023-11-07 23:51:14.896973 W | auth: simple token is not cryptographically signed
	2023-11-07 23:51:14.900170 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-07 23:51:14.901763 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-07 23:51:14.901860 I | embed: listening for metrics on http://192.168.61.191:2381
	2023-11-07 23:51:14.902485 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-07 23:51:14.902656 I | etcdserver/membership: added member 9f3c15d83e33cf49 [https://192.168.61.191:2380] to cluster fb9e9282e2104331
	2023-11-07 23:51:14.902831 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-07 23:51:14.902868 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-07 23:51:16.386364 I | raft: 9f3c15d83e33cf49 is starting a new election at term 2
	2023-11-07 23:51:16.386522 I | raft: 9f3c15d83e33cf49 became candidate at term 3
	2023-11-07 23:51:16.386650 I | raft: 9f3c15d83e33cf49 received MsgVoteResp from 9f3c15d83e33cf49 at term 3
	2023-11-07 23:51:16.386767 I | raft: 9f3c15d83e33cf49 became leader at term 3
	2023-11-07 23:51:16.386853 I | raft: raft.node: 9f3c15d83e33cf49 elected leader 9f3c15d83e33cf49 at term 3
	2023-11-07 23:51:16.388787 I | etcdserver: published {Name:old-k8s-version-729146 ClientURLs:[https://192.168.61.191:2379]} to cluster fb9e9282e2104331
	2023-11-07 23:51:16.389087 I | embed: ready to serve client requests
	2023-11-07 23:51:16.390809 I | embed: serving client requests on 192.168.61.191:2379
	2023-11-07 23:51:16.391017 I | embed: ready to serve client requests
	2023-11-07 23:51:16.392643 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-07 23:51:46.230147 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-bpf97\" " with result "range_response_count:1 size:2004" took too long (267.654539ms) to execute
	
	* 
	* ==> kernel <==
	*  23:52:10 up 1 min,  0 users,  load average: 1.29, 0.45, 0.16
	Linux old-k8s-version-729146 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [543654d61516] <==
	* I1107 23:51:20.025615       1 establishing_controller.go:73] Starting EstablishingController
	I1107 23:51:20.025643       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I1107 23:51:20.025662       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1107 23:51:20.055470       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:51:20.055865       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1107 23:51:20.126537       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:51:20.144957       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:51:20.154660       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:51:20.925188       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1107 23:51:20.925480       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1107 23:51:20.925630       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 23:51:20.939162       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1107 23:51:21.676211       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1107 23:51:21.709833       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1107 23:51:21.770373       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1107 23:51:21.804139       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1107 23:51:21.804215       1 handler_proxy.go:99] no RequestInfo found in the context
	E1107 23:51:21.804308       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1107 23:51:21.804320       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1107 23:51:21.810568       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:51:21.823977       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 23:51:37.144379       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1107 23:51:37.168652       1 controller.go:606] quota admission added evaluator for: endpoints
	I1107 23:51:37.246305       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-apiserver [ddd4cce1319b] <==
	* I1107 23:48:26.169026       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1107 23:48:26.505878       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.61.191]
	I1107 23:48:26.507375       1 controller.go:606] quota admission added evaluator for: endpoints
	I1107 23:48:26.601572       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:48:27.410062       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1107 23:48:27.903363       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1107 23:48:28.112803       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1107 23:48:42.806999       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1107 23:48:43.027290       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1107 23:48:43.137894       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1107 23:50:03.510756       1 controller.go:182] Shutting down kubernetes service endpoint reconciler
	I1107 23:50:03.511447       1 available_controller.go:395] Shutting down AvailableConditionController
	I1107 23:50:03.511479       1 controller.go:122] Shutting down OpenAPI controller
	I1107 23:50:03.511586       1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1107 23:50:03.511604       1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
	I1107 23:50:03.511621       1 establishing_controller.go:84] Shutting down EstablishingController
	I1107 23:50:03.511726       1 naming_controller.go:299] Shutting down NamingConditionController
	I1107 23:50:03.511753       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
	I1107 23:50:03.511775       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I1107 23:50:03.512782       1 autoregister_controller.go:164] Shutting down autoregister controller
	I1107 23:50:03.512826       1 crd_finalizer.go:286] Shutting down CRDFinalizer
	I1107 23:50:03.512836       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
	I1107 23:50:03.513006       1 controller.go:87] Shutting down OpenAPI AggregationController
	I1107 23:50:03.530503       1 secure_serving.go:167] Stopped listening on [::]:8443
	E1107 23:50:03.540044       1 controller.go:185] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [381c9febe570] <==
	* I1107 23:48:42.996945       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I1107 23:48:43.025220       1 shared_informer.go:204] Caches are synced for deployment 
	I1107 23:48:43.052816       1 shared_informer.go:204] Caches are synced for disruption 
	I1107 23:48:43.052866       1 disruption.go:341] Sending events to api server.
	I1107 23:48:43.090311       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"44b26acf-dfc5-436b-995a-87ae31db5492", APIVersion:"apps/v1", ResourceVersion:"199", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-t2qc9
	I1107 23:48:43.115112       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"b72bd934-20d5-48e6-a551-933e623487b1", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1107 23:48:43.146155       1 range_allocator.go:359] Set node old-k8s-version-729146 PodCIDR to [10.244.0.0/24]
	I1107 23:48:43.205918       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"6d05752f-41ae-4076-9fea-d4e9557230b3", APIVersion:"apps/v1", ResourceVersion:"321", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-bgqhf
	I1107 23:48:43.206128       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1107 23:48:43.245819       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"6d05752f-41ae-4076-9fea-d4e9557230b3", APIVersion:"apps/v1", ResourceVersion:"321", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-bpf97
	I1107 23:48:43.246087       1 shared_informer.go:204] Caches are synced for expand 
	I1107 23:48:43.248802       1 shared_informer.go:204] Caches are synced for resource quota 
	I1107 23:48:43.253386       1 shared_informer.go:204] Caches are synced for resource quota 
	I1107 23:48:43.257169       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1107 23:48:43.257798       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
	I1107 23:48:43.298596       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1107 23:48:43.298701       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E1107 23:48:43.353228       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"44b26acf-dfc5-436b-995a-87ae31db5492", ResourceVersion:"199", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63834997708, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001692280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00160da80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0016922a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0016922c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001692300)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0017700f0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019089c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00162fda0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00020a040)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001908a08)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1107 23:48:43.403287       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"b72bd934-20d5-48e6-a551-933e623487b1", APIVersion:"apps/v1", ResourceVersion:"334", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1107 23:48:43.419163       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"6d05752f-41ae-4076-9fea-d4e9557230b3", APIVersion:"apps/v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-bgqhf
	I1107 23:50:02.598021       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"647637a4-01cc-46f4-9558-bc514ba33c53", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-74d5856cc6 to 1
	I1107 23:50:02.613014       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"a63e37a2-48e0-4fcb-a336-7c7628eae506", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "metrics-server-74d5856cc6-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E1107 23:50:02.619470       1 replica_set.go:450] Sync "kube-system/metrics-server-74d5856cc6" failed with pods "metrics-server-74d5856cc6-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E1107 23:50:02.687359       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E1107 23:50:02.688255       1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	* 
	* ==> kube-controller-manager [f0d7637b959e] <==
	* I1107 23:51:37.218029       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"a63e37a2-48e0-4fcb-a336-7c7628eae506", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-dngxl
	I1107 23:51:37.220237       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I1107 23:51:37.250397       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"2815268c-8d15-4593-9b71-8fa5608040f7", APIVersion:"apps/v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-f58bn
	I1107 23:51:37.250522       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"3bb38670-7837-42d3-bddd-7ab5cd3bb4f5", APIVersion:"apps/v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-rzpj7
	I1107 23:51:37.253661       1 shared_informer.go:204] Caches are synced for disruption 
	I1107 23:51:37.253805       1 disruption.go:341] Sending events to api server.
	I1107 23:51:37.263249       1 shared_informer.go:204] Caches are synced for stateful set 
	I1107 23:51:37.439616       1 shared_informer.go:204] Caches are synced for taint 
	I1107 23:51:37.439800       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W1107 23:51:37.439895       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-729146. Assuming now as a timestamp.
	I1107 23:51:37.439921       1 node_lifecycle_controller.go:1108] Controller detected that zone  is now in state Normal.
	I1107 23:51:37.440137       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I1107 23:51:37.440201       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-729146", UID:"01e3772f-4a21-4648-8365-fdff078a5cc8", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-729146 event: Registered Node old-k8s-version-729146 in Controller
	I1107 23:51:37.545095       1 shared_informer.go:204] Caches are synced for resource quota 
	I1107 23:51:37.605536       1 shared_informer.go:204] Caches are synced for attach detach 
	I1107 23:51:37.666777       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1107 23:51:37.668215       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1107 23:51:37.668255       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E1107 23:51:37.920453       1 memcache.go:199] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1107 23:51:37.969991       1 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1107 23:51:38.788844       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1107 23:51:38.788943       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I1107 23:51:38.889294       1 shared_informer.go:204] Caches are synced for resource quota 
	E1107 23:52:09.141020       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1107 23:52:09.669044       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [dd434c70f1a4] <==
	* W1107 23:48:44.865958       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1107 23:48:45.059144       1 node.go:135] Successfully retrieved node IP: 192.168.61.191
	I1107 23:48:45.059221       1 server_others.go:149] Using iptables Proxier.
	I1107 23:48:45.064432       1 server.go:529] Version: v1.16.0
	I1107 23:48:45.066742       1 config.go:131] Starting endpoints config controller
	I1107 23:48:45.066794       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1107 23:48:45.066917       1 config.go:313] Starting service config controller
	I1107 23:48:45.066933       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1107 23:48:45.167047       1 shared_informer.go:204] Caches are synced for service config 
	I1107 23:48:45.167130       1 shared_informer.go:204] Caches are synced for endpoints config 
	E1107 23:50:03.529923       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=488&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp 192.168.61.191:8443: connect: connection refused
	E1107 23:50:03.529997       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492&timeout=8m19s&timeoutSeconds=499&watch=true: dial tcp 192.168.61.191:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [e9debebfef1f] <==
	* W1107 23:51:23.364608       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1107 23:51:23.375933       1 node.go:135] Successfully retrieved node IP: 192.168.61.191
	I1107 23:51:23.375971       1 server_others.go:149] Using iptables Proxier.
	I1107 23:51:23.376475       1 server.go:529] Version: v1.16.0
	I1107 23:51:23.384951       1 config.go:313] Starting service config controller
	I1107 23:51:23.384981       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1107 23:51:23.384995       1 config.go:131] Starting endpoints config controller
	I1107 23:51:23.385002       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1107 23:51:23.488751       1 shared_informer.go:204] Caches are synced for service config 
	I1107 23:51:23.488759       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [431c1f869b4b] <==
	* I1107 23:51:16.085451       1 serving.go:319] Generated self-signed cert in-memory
	W1107 23:51:20.033930       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 23:51:20.033959       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 23:51:20.033972       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 23:51:20.033981       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 23:51:20.052558       1 server.go:143] Version: v1.16.0
	I1107 23:51:20.052656       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W1107 23:51:20.066001       1 authorization.go:47] Authorization is disabled
	W1107 23:51:20.066023       1 authentication.go:79] Authentication is disabled
	I1107 23:51:20.066035       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1107 23:51:20.066640       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [70c344c74bd2] <==
	* W1107 23:48:23.187503       1 authentication.go:79] Authentication is disabled
	I1107 23:48:23.187609       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1107 23:48:23.189021       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1107 23:48:23.267271       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:48:23.267536       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:48:23.267780       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:48:23.268113       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:48:23.268220       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:48:23.271872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:48:23.282575       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:48:23.283934       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:48:23.284045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:48:23.291029       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:48:23.295512       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:48:24.269001       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:48:24.286946       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:48:24.290138       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:48:24.294925       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:48:24.297491       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:48:24.299490       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:48:24.300920       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:48:24.302104       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:48:24.303325       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:48:24.304605       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:48:24.305511       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:50:40 UTC, ends at Tue 2023-11-07 23:52:11 UTC. --
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: E1107 23:51:38.247493    1491 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: E1107 23:51:38.247559    1491 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: E1107 23:51:38.247605    1491 pod_workers.go:191] Error syncing pod 945fb950-2668-4ed1-a694-8bcd76250fde ("metrics-server-74d5856cc6-dngxl_kube-system(945fb950-2668-4ed1-a694-8bcd76250fde)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: W1107 23:51:38.384803    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-f58bn through plugin: invalid network status for
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: W1107 23:51:38.558992    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-74d5856cc6-dngxl through plugin: invalid network status for
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: E1107 23:51:38.567767    1491 pod_workers.go:191] Error syncing pod 945fb950-2668-4ed1-a694-8bcd76250fde ("metrics-server-74d5856cc6-dngxl_kube-system(945fb950-2668-4ed1-a694-8bcd76250fde)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: W1107 23:51:38.569079    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-f58bn through plugin: invalid network status for
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: W1107 23:51:38.752228    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-rzpj7 through plugin: invalid network status for
	Nov 07 23:51:38 old-k8s-version-729146 kubelet[1491]: W1107 23:51:38.753870    1491 pod_container_deletor.go:75] Container "928586e7186f6b369bcdf5c1977d3729d2a1d76509e1f38e65000ff305dad297" not found in pod's containers
	Nov 07 23:51:39 old-k8s-version-729146 kubelet[1491]: W1107 23:51:39.764783    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-rzpj7 through plugin: invalid network status for
	Nov 07 23:51:46 old-k8s-version-729146 kubelet[1491]: W1107 23:51:46.918278    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-f58bn through plugin: invalid network status for
	Nov 07 23:51:53 old-k8s-version-729146 kubelet[1491]: E1107 23:51:53.064593    1491 pod_workers.go:191] Error syncing pod 5bf7dc21-570e-4b93-9a0c-be49a6a60a4d ("storage-provisioner_kube-system(5bf7dc21-570e-4b93-9a0c-be49a6a60a4d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5bf7dc21-570e-4b93-9a0c-be49a6a60a4d)"
	Nov 07 23:51:53 old-k8s-version-729146 kubelet[1491]: E1107 23:51:53.580960    1491 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 07 23:51:53 old-k8s-version-729146 kubelet[1491]: E1107 23:51:53.581059    1491 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 07 23:51:53 old-k8s-version-729146 kubelet[1491]: E1107 23:51:53.581161    1491 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 07 23:51:53 old-k8s-version-729146 kubelet[1491]: E1107 23:51:53.581211    1491 pod_workers.go:191] Error syncing pod 945fb950-2668-4ed1-a694-8bcd76250fde ("metrics-server-74d5856cc6-dngxl_kube-system(945fb950-2668-4ed1-a694-8bcd76250fde)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 07 23:51:54 old-k8s-version-729146 kubelet[1491]: W1107 23:51:54.089099    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-rzpj7 through plugin: invalid network status for
	Nov 07 23:51:55 old-k8s-version-729146 kubelet[1491]: W1107 23:51:55.138285    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-rzpj7 through plugin: invalid network status for
	Nov 07 23:51:55 old-k8s-version-729146 kubelet[1491]: W1107 23:51:55.826282    1491 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod4833d19c-bb22-4e97-86ee-a047eaa00097/3647521315d7cafeb2b49e09ac2a7489a190519893ca39e3180ec253d68a43a0": none of the resources are being tracked.
	Nov 07 23:51:56 old-k8s-version-729146 kubelet[1491]: W1107 23:51:56.163305    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-rzpj7 through plugin: invalid network status for
	Nov 07 23:51:56 old-k8s-version-729146 kubelet[1491]: E1107 23:51:56.176296    1491 pod_workers.go:191] Error syncing pod 4833d19c-bb22-4e97-86ee-a047eaa00097 ("dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard(4833d19c-bb22-4e97-86ee-a047eaa00097)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard(4833d19c-bb22-4e97-86ee-a047eaa00097)"
	Nov 07 23:51:57 old-k8s-version-729146 kubelet[1491]: W1107 23:51:57.200879    1491 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-rzpj7 through plugin: invalid network status for
	Nov 07 23:51:57 old-k8s-version-729146 kubelet[1491]: E1107 23:51:57.206089    1491 pod_workers.go:191] Error syncing pod 4833d19c-bb22-4e97-86ee-a047eaa00097 ("dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard(4833d19c-bb22-4e97-86ee-a047eaa00097)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard(4833d19c-bb22-4e97-86ee-a047eaa00097)"
	Nov 07 23:52:01 old-k8s-version-729146 kubelet[1491]: E1107 23:52:01.720934    1491 pod_workers.go:191] Error syncing pod 4833d19c-bb22-4e97-86ee-a047eaa00097 ("dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard(4833d19c-bb22-4e97-86ee-a047eaa00097)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-rzpj7_kubernetes-dashboard(4833d19c-bb22-4e97-86ee-a047eaa00097)"
	Nov 07 23:52:05 old-k8s-version-729146 kubelet[1491]: E1107 23:52:05.068623    1491 pod_workers.go:191] Error syncing pod 945fb950-2668-4ed1-a694-8bcd76250fde ("metrics-server-74d5856cc6-dngxl_kube-system(945fb950-2668-4ed1-a694-8bcd76250fde)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [eb88789cf599] <==
	* 2023/11/07 23:51:46 Using namespace: kubernetes-dashboard
	2023/11/07 23:51:46 Using in-cluster config to connect to apiserver
	2023/11/07 23:51:46 Using secret token for csrf signing
	2023/11/07 23:51:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/07 23:51:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/07 23:51:46 Successful initial request to the apiserver, version: v1.16.0
	2023/11/07 23:51:46 Generating JWE encryption key
	2023/11/07 23:51:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/07 23:51:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/07 23:51:47 Initializing JWE encryption key from synchronized object
	2023/11/07 23:51:47 Creating in-cluster Sidecar client
	2023/11/07 23:51:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/07 23:51:47 Serving insecurely on HTTP port: 9090
	2023/11/07 23:51:46 Starting overwatch
	
	* 
	* ==> storage-provisioner [647cac5e521d] <==
	* I1107 23:52:08.312957       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:52:08.327180       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:52:08.328126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [eb58e2586cfb] <==
	* I1107 23:51:22.531257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1107 23:51:52.548748       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-729146 -n old-k8s-version-729146
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-729146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-dngxl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-729146 describe pod metrics-server-74d5856cc6-dngxl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-729146 describe pod metrics-server-74d5856cc6-dngxl: exit status 1 (69.10477ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-dngxl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-729146 describe pod metrics-server-74d5856cc6-dngxl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.98s)

                                                
                                    

Test pass (288/321)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 4.76
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.58
20 TestOffline 102.12
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 152.9
27 TestAddons/parallel/Registry 16.59
28 TestAddons/parallel/Ingress 25.21
29 TestAddons/parallel/InspektorGadget 10.94
30 TestAddons/parallel/MetricsServer 5.83
31 TestAddons/parallel/HelmTiller 14.73
33 TestAddons/parallel/CSI 66.25
34 TestAddons/parallel/Headlamp 15.06
35 TestAddons/parallel/CloudSpanner 5.76
36 TestAddons/parallel/LocalPath 55.2
37 TestAddons/parallel/NvidiaDevicePlugin 5.58
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/StoppedEnableDisable 13.43
42 TestCertOptions 63.24
43 TestCertExpiration 332.78
44 TestDockerFlags 89.86
45 TestForceSystemdFlag 74.1
46 TestForceSystemdEnv 105.09
48 TestKVMDriverInstallOrUpdate 2.81
52 TestErrorSpam/setup 49.37
53 TestErrorSpam/start 0.39
54 TestErrorSpam/status 0.78
55 TestErrorSpam/pause 1.22
56 TestErrorSpam/unpause 1.38
57 TestErrorSpam/stop 3.53
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 64.13
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 42.58
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.09
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.42
69 TestFunctional/serial/CacheCmd/cache/add_local 1.33
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.3
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 40.62
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.08
80 TestFunctional/serial/LogsFileCmd 1.12
81 TestFunctional/serial/InvalidService 5.24
83 TestFunctional/parallel/ConfigCmd 0.45
84 TestFunctional/parallel/DashboardCmd 18.44
85 TestFunctional/parallel/DryRun 0.7
86 TestFunctional/parallel/InternationalLanguage 0.16
87 TestFunctional/parallel/StatusCmd 1.11
91 TestFunctional/parallel/ServiceCmdConnect 12.56
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 56.53
95 TestFunctional/parallel/SSHCmd 0.46
96 TestFunctional/parallel/CpCmd 1.04
97 TestFunctional/parallel/MySQL 41.56
98 TestFunctional/parallel/FileSync 0.23
99 TestFunctional/parallel/CertSync 1.52
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
108 TestFunctional/parallel/DockerEnv/bash 0.93
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 0.73
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.36
119 TestFunctional/parallel/ImageCommands/Setup 1.46
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.23
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.24
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.61
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.82
133 TestFunctional/parallel/ServiceCmd/List 0.29
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
136 TestFunctional/parallel/ServiceCmd/Format 0.37
137 TestFunctional/parallel/ServiceCmd/URL 0.35
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.24
140 TestFunctional/parallel/ProfileCmd/profile_list 0.39
141 TestFunctional/parallel/MountCmd/any-port 11.82
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.26
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.72
146 TestFunctional/parallel/MountCmd/specific-port 1.85
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.12
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.01
150 TestFunctional/delete_minikube_cached_images 0.02
151 TestGvisorAddon 321.15
154 TestImageBuild/serial/Setup 54.38
155 TestImageBuild/serial/NormalBuild 1.91
156 TestImageBuild/serial/BuildWithBuildArg 1.37
157 TestImageBuild/serial/BuildWithDockerIgnore 0.42
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
161 TestIngressAddonLegacy/StartLegacyK8sCluster 74.26
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.38
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.51
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.39
168 TestJSONOutput/start/Command 64.14
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.55
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.56
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 13.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.23
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 103.61
200 TestMountStart/serial/StartWithMountFirst 29.15
201 TestMountStart/serial/VerifyMountFirst 0.4
202 TestMountStart/serial/StartWithMountSecond 28.31
203 TestMountStart/serial/VerifyMountSecond 0.41
204 TestMountStart/serial/DeleteFirst 0.7
205 TestMountStart/serial/VerifyMountPostDelete 0.42
206 TestMountStart/serial/Stop 11.42
207 TestMountStart/serial/RestartStopped 23.06
208 TestMountStart/serial/VerifyMountPostStop 0.4
211 TestMultiNode/serial/FreshStart2Nodes 126.25
212 TestMultiNode/serial/DeployApp2Nodes 5.07
213 TestMultiNode/serial/PingHostFrom2Pods 0.94
214 TestMultiNode/serial/AddNode 50.09
215 TestMultiNode/serial/ProfileList 0.21
216 TestMultiNode/serial/CopyFile 7.66
217 TestMultiNode/serial/StopNode 4.02
218 TestMultiNode/serial/StartAfterStop 31.08
219 TestMultiNode/serial/RestartKeepsNodes 184.41
220 TestMultiNode/serial/DeleteNode 1.77
221 TestMultiNode/serial/StopMultiNode 25.55
222 TestMultiNode/serial/RestartMultiNode 103.59
223 TestMultiNode/serial/ValidateNameConflict 53.47
228 TestPreload 168.05
230 TestScheduledStopUnix 120.52
231 TestSkaffold 140.17
234 TestRunningBinaryUpgrade 182.25
236 TestKubernetesUpgrade 202.07
243 TestStoppedBinaryUpgrade/Setup 0.34
250 TestStoppedBinaryUpgrade/Upgrade 199.62
259 TestPause/serial/Start 68.95
260 TestPause/serial/SecondStartNoReconfiguration 45.18
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 57.49
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.57
265 TestPause/serial/Pause 0.58
266 TestPause/serial/VerifyStatus 0.26
267 TestPause/serial/Unpause 0.57
268 TestPause/serial/PauseAgain 0.72
269 TestPause/serial/DeletePaused 1.03
270 TestPause/serial/VerifyDeletedResources 0.23
271 TestNoKubernetes/serial/StartWithStopK8s 70.17
272 TestNoKubernetes/serial/Start 45.21
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
274 TestNoKubernetes/serial/ProfileList 1.13
275 TestNoKubernetes/serial/Stop 2.12
276 TestNoKubernetes/serial/StartNoArgs 45.33
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
278 TestNetworkPlugins/group/auto/Start 92.6
279 TestNetworkPlugins/group/kindnet/Start 96.66
280 TestNetworkPlugins/group/auto/KubeletFlags 0.25
281 TestNetworkPlugins/group/auto/NetCatPod 12.52
282 TestNetworkPlugins/group/auto/DNS 0.2
283 TestNetworkPlugins/group/auto/Localhost 0.18
284 TestNetworkPlugins/group/auto/HairPin 0.16
285 TestNetworkPlugins/group/calico/Start 100.38
286 TestNetworkPlugins/group/custom-flannel/Start 90.46
287 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
288 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
289 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
290 TestNetworkPlugins/group/kindnet/DNS 0.43
291 TestNetworkPlugins/group/kindnet/Localhost 0.19
292 TestNetworkPlugins/group/kindnet/HairPin 0.19
293 TestNetworkPlugins/group/false/Start 81.57
294 TestNetworkPlugins/group/enable-default-cni/Start 111.77
295 TestNetworkPlugins/group/calico/ControllerPod 5.03
296 TestNetworkPlugins/group/calico/KubeletFlags 0.22
297 TestNetworkPlugins/group/calico/NetCatPod 13.38
298 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
299 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
300 TestNetworkPlugins/group/custom-flannel/DNS 0.24
301 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
302 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
303 TestNetworkPlugins/group/calico/DNS 0.28
304 TestNetworkPlugins/group/calico/Localhost 0.2
305 TestNetworkPlugins/group/calico/HairPin 0.21
306 TestNetworkPlugins/group/flannel/Start 93.46
307 TestNetworkPlugins/group/bridge/Start 146.53
308 TestNetworkPlugins/group/false/KubeletFlags 0.23
309 TestNetworkPlugins/group/false/NetCatPod 11.39
310 TestNetworkPlugins/group/false/DNS 0.19
311 TestNetworkPlugins/group/false/Localhost 0.18
312 TestNetworkPlugins/group/false/HairPin 0.19
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
314 TestNetworkPlugins/group/kubenet/Start 112.98
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.41
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
320 TestStartStop/group/old-k8s-version/serial/FirstStart 149.82
321 TestNetworkPlugins/group/flannel/ControllerPod 5.03
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
323 TestNetworkPlugins/group/flannel/NetCatPod 14.4
324 TestNetworkPlugins/group/flannel/DNS 0.25
325 TestNetworkPlugins/group/flannel/Localhost 0.16
326 TestNetworkPlugins/group/flannel/HairPin 0.19
328 TestStartStop/group/no-preload/serial/FirstStart 93.26
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
330 TestNetworkPlugins/group/bridge/NetCatPod 11.37
331 TestNetworkPlugins/group/bridge/DNS 0.2
332 TestNetworkPlugins/group/bridge/Localhost 0.16
333 TestNetworkPlugins/group/bridge/HairPin 0.17
334 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
335 TestNetworkPlugins/group/kubenet/NetCatPod 13.47
337 TestStartStop/group/embed-certs/serial/FirstStart 77.06
338 TestNetworkPlugins/group/kubenet/DNS 0.25
339 TestNetworkPlugins/group/kubenet/Localhost 0.23
340 TestNetworkPlugins/group/kubenet/HairPin 0.21
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.97
343 TestStartStop/group/no-preload/serial/DeployApp 10.54
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.24
345 TestStartStop/group/no-preload/serial/Stop 13.16
346 TestStartStop/group/old-k8s-version/serial/DeployApp 8.57
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
348 TestStartStop/group/old-k8s-version/serial/Stop 13.15
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.36
350 TestStartStop/group/no-preload/serial/SecondStart 329.62
351 TestStartStop/group/embed-certs/serial/DeployApp 9.52
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
353 TestStartStop/group/old-k8s-version/serial/SecondStart 102.84
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
355 TestStartStop/group/embed-certs/serial/Stop 13.16
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.48
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/embed-certs/serial/SecondStart 321.21
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 328.9
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
366 TestStartStop/group/old-k8s-version/serial/Pause 2.86
368 TestStartStop/group/newest-cni/serial/FirstStart 69.74
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
371 TestStartStop/group/newest-cni/serial/Stop 8.13
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
373 TestStartStop/group/newest-cni/serial/SecondStart 49.31
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
377 TestStartStop/group/newest-cni/serial/Pause 2.37
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
381 TestStartStop/group/no-preload/serial/Pause 2.76
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
385 TestStartStop/group/embed-certs/serial/Pause 2.61
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.45
TestDownloadOnly/v1.16.0/json-events (7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-365601 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-365601 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (6.997430792s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-365601
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-365601: exit status 85 (75.623279ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-365601 | jenkins | v1.32.0 | 07 Nov 23 23:00 UTC |          |
	|         | -p download-only-365601        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:00:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:00:54.614737   16878 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:00:54.614917   16878 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:00:54.614927   16878 out.go:309] Setting ErrFile to fd 2...
	I1107 23:00:54.614934   16878 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:00:54.615147   16878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	W1107 23:00:54.615291   16878 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-9672/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-9672/.minikube/config/config.json: no such file or directory
	I1107 23:00:54.615875   16878 out.go:303] Setting JSON to true
	I1107 23:00:54.616749   16878 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2608,"bootTime":1699395447,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:00:54.616810   16878 start.go:138] virtualization: kvm guest
	I1107 23:00:54.619294   16878 out.go:97] [download-only-365601] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:00:54.621323   16878 out.go:169] MINIKUBE_LOCATION=17585
	W1107 23:00:54.619402   16878 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17585-9672/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 23:00:54.619442   16878 notify.go:220] Checking for updates...
	I1107 23:00:54.624501   16878 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:00:54.625990   16878 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:00:54.627809   16878 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	I1107 23:00:54.629272   16878 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 23:00:54.631976   16878 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:00:54.632280   16878 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:00:54.739493   16878 out.go:97] Using the kvm2 driver based on user configuration
	I1107 23:00:54.739517   16878 start.go:298] selected driver: kvm2
	I1107 23:00:54.739523   16878 start.go:902] validating driver "kvm2" against <nil>
	I1107 23:00:54.739830   16878 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:00:54.739960   16878 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:00:54.754778   16878 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:00:54.754840   16878 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:00:54.755501   16878 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1107 23:00:54.755703   16878 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 23:00:54.755735   16878 cni.go:84] Creating CNI manager for ""
	I1107 23:00:54.755748   16878 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1107 23:00:54.755756   16878 start_flags.go:323] config:
	{Name:download-only-365601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-365601 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:00:54.756043   16878 iso.go:125] acquiring lock: {Name:mk6a728cebb26babf756ae6ad70b6747ae55e33b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:00:54.758293   16878 out.go:97] Downloading VM boot image ...
	I1107 23:00:54.758330   16878 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17585-9672/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1107 23:00:57.268426   16878 out.go:97] Starting control plane node download-only-365601 in cluster download-only-365601
	I1107 23:00:57.268456   16878 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 23:00:57.292786   16878 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 23:00:57.292815   16878 cache.go:56] Caching tarball of preloaded images
	I1107 23:00:57.292998   16878 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 23:00:57.295324   16878 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 23:00:57.295347   16878 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 23:00:57.331258   16878 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17585-9672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-365601"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
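The two `download.go:107` entries in the log above fetch the boot ISO and the preload tarball with a `checksum=` query appended (a `.sha256` file URL for the ISO, an inline `md5:<hex>` for the tarball), so the downloader can verify what it received. The verification step itself is just hash-and-compare; a minimal sketch with hypothetical payload bytes, not minikube's actual download code:

```python
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Compare the MD5 of downloaded bytes against the advertised checksum."""
    return hashlib.md5(data).hexdigest() == expected_hex

# Hypothetical payload standing in for the preload tarball.
payload = b"preloaded-images-k8s-v18-v1.16.0"
good = hashlib.md5(payload).hexdigest()

assert verify_md5(payload, good)           # intact download passes
assert not verify_md5(payload + b"x", good)  # corrupted download fails
```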

TestDownloadOnly/v1.28.3/json-events (4.76s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-365601 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-365601 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (4.757901434s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (4.76s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-365601
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-365601: exit status 85 (73.230082ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-365601 | jenkins | v1.32.0 | 07 Nov 23 23:00 UTC |          |
	|         | -p download-only-365601        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-365601 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |          |
	|         | -p download-only-365601        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:01:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:01:01.690682   16924 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:01:01.690947   16924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:01.690959   16924 out.go:309] Setting ErrFile to fd 2...
	I1107 23:01:01.690963   16924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:01.691201   16924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	W1107 23:01:01.691337   16924 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-9672/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-9672/.minikube/config/config.json: no such file or directory
	I1107 23:01:01.691794   16924 out.go:303] Setting JSON to true
	I1107 23:01:01.692616   16924 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2615,"bootTime":1699395447,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:01:01.692676   16924 start.go:138] virtualization: kvm guest
	I1107 23:01:01.695232   16924 out.go:97] [download-only-365601] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:01:01.697135   16924 out.go:169] MINIKUBE_LOCATION=17585
	I1107 23:01:01.695410   16924 notify.go:220] Checking for updates...
	I1107 23:01:01.700642   16924 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:01:01.703000   16924 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:01:01.704512   16924 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	I1107 23:01:01.706154   16924 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-365601"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)
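Both LogsDuration subtests pass even though `minikube logs` exits with status 85: for a download-only profile that was never started there is no control-plane node, and the test expects that specific non-zero code. The pattern of asserting on an exact return code can be sketched like this (hypothetical child command, not the real test harness):

```python
import subprocess
import sys

# Hypothetical stand-in for `minikube logs -p <profile>` exiting 85.
result = subprocess.run(
    [sys.executable, "-c", "raise SystemExit(85)"],
    capture_output=True,
)

# The test treats this specific non-zero status as the expected outcome,
# not as a failure.
assert result.returncode == 85
```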

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-365601
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-231857 --alsologtostderr --binary-mirror http://127.0.0.1:36481 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-231857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-231857
--- PASS: TestBinaryMirror (0.58s)

TestOffline (102.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-327928 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-327928 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m41.081439507s)
helpers_test.go:175: Cleaning up "offline-docker-327928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-327928
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-327928: (1.036529651s)
--- PASS: TestOffline (102.12s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-625969
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-625969: exit status 85 (65.310866ms)

-- stdout --
	* Profile "addons-625969" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-625969"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-625969
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-625969: exit status 85 (66.263625ms)

-- stdout --
	* Profile "addons-625969" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-625969"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (152.9s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-625969 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-625969 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.895441753s)
--- PASS: TestAddons/Setup (152.90s)

TestAddons/parallel/Registry (16.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 21.308449ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4lzwf" [661f4812-4783-4551-a800-04a9014f0167] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02118569s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4knq8" [fa76ae12-d4bc-4d9d-aa6c-8394b5affc9d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017057629s
addons_test.go:339: (dbg) Run:  kubectl --context addons-625969 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-625969 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-625969 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.602441723s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.59s)
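The registry check above runs a throwaway busybox pod that does `wget --spider` against the registry's in-cluster DNS name, i.e. a headers-only request that succeeds only if the service answers with 200. The same probe idea against a hypothetical local server (not the actual `registry.kube-system.svc.cluster.local` service):

```python
import http.server
import threading
import urllib.request

# Tiny local server standing in for the in-cluster registry service.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

# `wget --spider` is effectively a HEAD request: headers only, no body.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/", method="HEAD"
)
with urllib.request.urlopen(req) as resp:
    assert resp.status == 200  # service is reachable and answering

server.shutdown()
```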

TestAddons/parallel/Ingress (25.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-625969 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-625969 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-625969 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4da24c72-3ec0-4f3f-acf6-9c0bc27ae107] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4da24c72-3ec0-4f3f-acf6-9c0bc27ae107] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.015192053s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-625969 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.105
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-625969 addons disable ingress-dns --alsologtostderr -v=1: (1.766856389s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-625969 addons disable ingress --alsologtostderr -v=1: (7.78137465s)
--- PASS: TestAddons/parallel/Ingress (25.21s)

TestAddons/parallel/InspektorGadget (10.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8pdqt" [5fd7c82e-ea99-4a74-9ea7-a114b0c60302] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01321155s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-625969
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-625969: (5.923779338s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 23.082971ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-vts5q" [bb3bfd14-d874-4774-b594-7435917b6966] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015925642s
addons_test.go:414: (dbg) Run:  kubectl --context addons-625969 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/HelmTiller (14.73s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.89119ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-g7mxn" [1fb24280-29a8-426c-897d-aad8bc6caa17] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013285238s
addons_test.go:472: (dbg) Run:  kubectl --context addons-625969 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-625969 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.016318028s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.73s)

TestAddons/parallel/CSI (66.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 22.305683ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-625969 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/11/07 23:03:56 [DEBUG] GET http://192.168.39.105:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-625969 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [147803f3-f160-4f95-bc2f-b2404b98589a] Pending
helpers_test.go:344: "task-pv-pod" [147803f3-f160-4f95-bc2f-b2404b98589a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [147803f3-f160-4f95-bc2f-b2404b98589a] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.010448866s
addons_test.go:583: (dbg) Run:  kubectl --context addons-625969 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-625969 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-625969 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-625969 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-625969 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-625969 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-625969 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-625969 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b6f63810-bdea-4d00-8344-abf6311f40e6] Pending
helpers_test.go:344: "task-pv-pod-restore" [b6f63810-bdea-4d00-8344-abf6311f40e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b6f63810-bdea-4d00-8344-abf6311f40e6] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.010761973s
addons_test.go:625: (dbg) Run:  kubectl --context addons-625969 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-625969 delete pod task-pv-pod-restore: (1.238925739s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-625969 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-625969 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-625969 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.678891042s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.25s)

TestAddons/parallel/Headlamp (15.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-625969 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-625969 --alsologtostderr -v=1: (2.033222483s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-j9zjb" [f6dbc1d1-1404-4e20-a0bc-d7ca4227b958] Pending
helpers_test.go:344: "headlamp-94b766c-j9zjb" [f6dbc1d1-1404-4e20-a0bc-d7ca4227b958] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-j9zjb" [f6dbc1d1-1404-4e20-a0bc-d7ca4227b958] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.02839982s
--- PASS: TestAddons/parallel/Headlamp (15.06s)

TestAddons/parallel/CloudSpanner (5.76s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-d2dcb" [d15e472a-207f-44ad-a884-0ffe757aa247] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016566764s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-625969
--- PASS: TestAddons/parallel/CloudSpanner (5.76s)

TestAddons/parallel/LocalPath (55.2s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-625969 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-625969 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9b8e12ff-0c09-497e-9a30-c671650d42e6] Pending
helpers_test.go:344: "test-local-path" [9b8e12ff-0c09-497e-9a30-c671650d42e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9b8e12ff-0c09-497e-9a30-c671650d42e6] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9b8e12ff-0c09-497e-9a30-c671650d42e6] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.026193047s
addons_test.go:890: (dbg) Run:  kubectl --context addons-625969 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 ssh "cat /opt/local-path-provisioner/pvc-8f1f091f-26b1-4ba1-a114-db598756bad4_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-625969 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-625969 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-625969 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-625969 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.511380415s)
--- PASS: TestAddons/parallel/LocalPath (55.20s)

TestAddons/parallel/NvidiaDevicePlugin (5.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9fpdk" [9f3726b3-3092-4f9c-bc3d-9ef039af1ef4] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.018362664s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-625969
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-625969 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-625969 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (13.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-625969
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-625969: (13.116834089s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-625969
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-625969
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-625969
--- PASS: TestAddons/StoppedEnableDisable (13.43s)

TestCertOptions (63.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-769810 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1107 23:41:43.493841   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-769810 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m1.552334272s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-769810 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-769810 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-769810 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-769810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-769810
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-769810: (1.094867645s)
--- PASS: TestCertOptions (63.24s)

TestCertExpiration (332.78s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-694867 --memory=2048 --cert-expiration=3m --driver=kvm2 
E1107 23:39:40.434345   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:40.439615   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:40.450103   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:40.470413   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:40.510708   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:40.591032   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:40.751473   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:41.072026   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:41.712965   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:42.993510   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:45.554388   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:39:50.674845   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:40:00.915392   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-694867 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m28.286710436s)
E1107 23:41:02.356326   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-694867 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E1107 23:43:57.772546   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-694867 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m3.23936068s)
helpers_test.go:175: Cleaning up "cert-expiration-694867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-694867
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-694867: (1.249250097s)
--- PASS: TestCertExpiration (332.78s)

TestDockerFlags (89.86s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-690186 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E1107 23:40:21.395681   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-690186 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m28.291278831s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-690186 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-690186 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-690186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-690186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-690186: (1.070913376s)
--- PASS: TestDockerFlags (89.86s)

TestForceSystemdFlag (74.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-187357 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-187357 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m12.737169872s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-187357 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-187357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-187357
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-187357: (1.090479722s)
--- PASS: TestForceSystemdFlag (74.10s)

TestForceSystemdEnv (105.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-153350 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E1107 23:38:40.443693   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-153350 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m43.821676925s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-153350 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-153350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-153350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-153350: (1.0132416s)
--- PASS: TestForceSystemdEnv (105.09s)

TestKVMDriverInstallOrUpdate (2.81s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.81s)

TestErrorSpam/setup (49.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-004573 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-004573 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-004573 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-004573 --driver=kvm2 : (49.366764544s)
--- PASS: TestErrorSpam/setup (49.37s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 pause
--- PASS: TestErrorSpam/pause (1.22s)

TestErrorSpam/unpause (1.38s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 unpause
--- PASS: TestErrorSpam/unpause (1.38s)

TestErrorSpam/stop (3.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 stop: (3.359034167s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-004573 --log_dir /tmp/nospam-004573 stop
--- PASS: TestErrorSpam/stop (3.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17585-9672/.minikube/files/etc/test/nested/copy/16866/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-277453 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-277453 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m4.13396273s)
--- PASS: TestFunctional/serial/StartWithProxy (64.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-277453 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-277453 --alsologtostderr -v=8: (42.580783689s)
functional_test.go:659: soft start took 42.581370113s for "functional-277453" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.58s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-277453 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-277453 /tmp/TestFunctionalserialCacheCmdcacheadd_local2875084003/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cache add minikube-local-cache-test:functional-277453
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cache delete minikube-local-cache-test:functional-277453
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-277453
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.30s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.680242ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.30s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 kubectl -- --context functional-277453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-277453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (40.62s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-277453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 23:08:40.445624   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:40.451273   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:40.461500   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:40.481756   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:40.522042   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:40.602405   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:40.762814   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:41.083372   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:41.724435   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:43.004965   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:08:45.566050   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-277453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.620340523s)
functional_test.go:757: restart took 40.620449105s for "functional-277453" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.62s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-277453 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.08s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 logs: (1.080740965s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

TestFunctional/serial/LogsFileCmd (1.12s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 logs --file /tmp/TestFunctionalserialLogsFileCmd4043797188/001/logs.txt
E1107 23:08:50.687241   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 logs --file /tmp/TestFunctionalserialLogsFileCmd4043797188/001/logs.txt: (1.11838365s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

TestFunctional/serial/InvalidService (5.24s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-277453 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-277453
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-277453: exit status 115 (323.724642ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.2:31108 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-277453 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-277453 delete -f testdata/invalidsvc.yaml: (1.605739026s)
--- PASS: TestFunctional/serial/InvalidService (5.24s)

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 config get cpus: exit status 14 (65.014172ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 config get cpus: exit status 14 (63.399983ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (18.44s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-277453 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-277453 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23537: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.44s)

TestFunctional/parallel/DryRun (0.70s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-277453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-277453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (496.264427ms)
-- stdout --
	* [functional-277453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1107 23:09:12.550688   23245 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:09:12.551068   23245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:09:12.551077   23245 out.go:309] Setting ErrFile to fd 2...
	I1107 23:09:12.551084   23245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:09:12.551397   23245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	I1107 23:09:12.552105   23245 out.go:303] Setting JSON to false
	I1107 23:09:12.553405   23245 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3106,"bootTime":1699395447,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:09:12.553487   23245 start.go:138] virtualization: kvm guest
	I1107 23:09:12.555943   23245 out.go:177] * [functional-277453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:09:12.557736   23245 notify.go:220] Checking for updates...
	I1107 23:09:12.557756   23245 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:09:12.559658   23245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:09:12.561426   23245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:09:12.565512   23245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	I1107 23:09:12.571839   23245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:09:12.574973   23245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:09:12.577122   23245 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:09:12.577789   23245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:09:12.577925   23245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:09:12.609079   23245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I1107 23:09:12.609739   23245 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:09:12.610479   23245 main.go:141] libmachine: Using API Version  1
	I1107 23:09:12.610503   23245 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:09:12.610914   23245 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:09:12.611067   23245 main.go:141] libmachine: (functional-277453) Calling .DriverName
	I1107 23:09:12.611322   23245 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:09:12.611589   23245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:09:12.611611   23245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:09:12.634580   23245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I1107 23:09:12.635009   23245 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:09:12.635510   23245 main.go:141] libmachine: Using API Version  1
	I1107 23:09:12.635528   23245 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:09:12.635896   23245 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:09:12.636070   23245 main.go:141] libmachine: (functional-277453) Calling .DriverName
	I1107 23:09:12.700609   23245 out.go:177] * Using the kvm2 driver based on existing profile
	I1107 23:09:12.829819   23245 start.go:298] selected driver: kvm2
	I1107 23:09:12.829842   23245 start.go:902] validating driver "kvm2" against &{Name:functional-277453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-277
453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/j
enkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:09:12.829967   23245 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:09:12.960628   23245 out.go:177] 
	W1107 23:09:12.962394   23245 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 23:09:12.964145   23245 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-277453 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.70s)

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-277453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-277453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (158.453279ms)
-- stdout --
	* [functional-277453] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1107 23:09:13.229787   23377 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:09:13.229930   23377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:09:13.229940   23377 out.go:309] Setting ErrFile to fd 2...
	I1107 23:09:13.229947   23377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:09:13.230226   23377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	I1107 23:09:13.230729   23377 out.go:303] Setting JSON to false
	I1107 23:09:13.231611   23377 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3106,"bootTime":1699395447,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:09:13.231684   23377 start.go:138] virtualization: kvm guest
	I1107 23:09:13.234563   23377 out.go:177] * [functional-277453] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1107 23:09:13.236474   23377 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:09:13.238121   23377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:09:13.236520   23377 notify.go:220] Checking for updates...
	I1107 23:09:13.239954   23377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	I1107 23:09:13.241568   23377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	I1107 23:09:13.243158   23377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:09:13.245186   23377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:09:13.247372   23377 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:09:13.247970   23377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:09:13.248043   23377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:09:13.263170   23377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I1107 23:09:13.263540   23377 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:09:13.263972   23377 main.go:141] libmachine: Using API Version  1
	I1107 23:09:13.263989   23377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:09:13.264271   23377 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:09:13.264429   23377 main.go:141] libmachine: (functional-277453) Calling .DriverName
	I1107 23:09:13.264617   23377 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:09:13.264894   23377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:09:13.264920   23377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:09:13.279038   23377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40355
	I1107 23:09:13.279370   23377 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:09:13.279927   23377 main.go:141] libmachine: Using API Version  1
	I1107 23:09:13.279948   23377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:09:13.280294   23377 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:09:13.280454   23377 main.go:141] libmachine: (functional-277453) Calling .DriverName
	I1107 23:09:13.315237   23377 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1107 23:09:13.316858   23377 start.go:298] selected driver: kvm2
	I1107 23:09:13.316873   23377 start.go:902] validating driver "kvm2" against &{Name:functional-277453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-277
453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/j
enkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:09:13.316969   23377 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:09:13.319313   23377 out.go:177] 
	W1107 23:09:13.320969   23377 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 23:09:13.324547   23377 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (12.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-277453 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-277453 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-lwcb5" [be4074ef-b194-435d-a5e8-c9c6aabbcbb0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-lwcb5" [be4074ef-b194-435d-a5e8-c9c6aabbcbb0] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.022051216s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.2:30114
functional_test.go:1674: http://192.168.39.2:30114: success! body:

Hostname: hello-node-connect-55497b8b78-lwcb5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.2:30114
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.56s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (56.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d2a692df-bb6d-4276-9d81-f8d17b50ef5a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014399553s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-277453 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-277453 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-277453 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-277453 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-277453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3fd8bc43-3c9f-4c54-806e-ac8cdfab0751] Pending
helpers_test.go:344: "sp-pod" [3fd8bc43-3c9f-4c54-806e-ac8cdfab0751] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3fd8bc43-3c9f-4c54-806e-ac8cdfab0751] Running
E1107 23:09:21.408708   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.021903652s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-277453 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-277453 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-277453 delete -f testdata/storage-provisioner/pod.yaml: (1.997434168s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-277453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7ea36de4-d034-4426-b7db-c91ff524b563] Pending
helpers_test.go:344: "sp-pod" [7ea36de4-d034-4426-b7db-c91ff524b563] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2023/11/07 23:09:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [7ea36de4-d034-4426-b7db-c91ff524b563] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.03051271s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-277453 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.53s)

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh -n functional-277453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 cp functional-277453:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3504665914/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh -n functional-277453 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.04s)

TestFunctional/parallel/MySQL (41.56s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-277453 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-8vzx9" [8f943280-5e34-40db-aea4-c31b3789c206] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-8vzx9" [8f943280-5e34-40db-aea4-c31b3789c206] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 36.031525276s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;": exit status 1 (166.408153ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;": exit status 1 (179.514166ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;": exit status 1 (153.986155ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-277453 exec mysql-859648c796-8vzx9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (41.56s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16866/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /etc/test/nested/copy/16866/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.52s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16866.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /etc/ssl/certs/16866.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16866.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /usr/share/ca-certificates/16866.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/168662.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /etc/ssl/certs/168662.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/168662.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /usr/share/ca-certificates/168662.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.52s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-277453 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 ssh "sudo systemctl is-active crio": exit status 1 (248.266331ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

TestFunctional/parallel/DockerEnv/bash (0.93s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-277453 docker-env) && out/minikube-linux-amd64 status -p functional-277453"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-277453 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.93s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.73s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.73s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-277453 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-277453
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-277453
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-277453 image ls --format short --alsologtostderr:
I1107 23:09:27.924465   24186 out.go:296] Setting OutFile to fd 1 ...
I1107 23:09:27.924617   24186 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:27.924627   24186 out.go:309] Setting ErrFile to fd 2...
I1107 23:09:27.924634   24186 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:27.924851   24186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
I1107 23:09:27.925419   24186 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:27.925514   24186 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:27.925867   24186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:27.925914   24186 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:27.941088   24186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
I1107 23:09:27.941573   24186 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:27.942142   24186 main.go:141] libmachine: Using API Version  1
I1107 23:09:27.942166   24186 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:27.942586   24186 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:27.942812   24186 main.go:141] libmachine: (functional-277453) Calling .GetState
I1107 23:09:27.944919   24186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:27.944974   24186 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:27.959485   24186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
I1107 23:09:27.959884   24186 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:27.960318   24186 main.go:141] libmachine: Using API Version  1
I1107 23:09:27.960338   24186 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:27.960621   24186 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:27.960810   24186 main.go:141] libmachine: (functional-277453) Calling .DriverName
I1107 23:09:27.961071   24186 ssh_runner.go:195] Run: systemctl --version
I1107 23:09:27.961105   24186 main.go:141] libmachine: (functional-277453) Calling .GetSSHHostname
I1107 23:09:27.963823   24186 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:27.964205   24186 main.go:141] libmachine: (functional-277453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:de:93", ip: ""} in network mk-functional-277453: {Iface:virbr1 ExpiryTime:2023-11-08 00:06:31 +0000 UTC Type:0 Mac:52:54:00:5f:de:93 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-277453 Clientid:01:52:54:00:5f:de:93}
I1107 23:09:27.964243   24186 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined IP address 192.168.39.2 and MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:27.964399   24186 main.go:141] libmachine: (functional-277453) Calling .GetSSHPort
I1107 23:09:27.964580   24186 main.go:141] libmachine: (functional-277453) Calling .GetSSHKeyPath
I1107 23:09:27.964755   24186 main.go:141] libmachine: (functional-277453) Calling .GetSSHUsername
I1107 23:09:27.964973   24186 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/functional-277453/id_rsa Username:docker}
I1107 23:09:28.052823   24186 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1107 23:09:28.082861   24186 main.go:141] libmachine: Making call to close driver server
I1107 23:09:28.082880   24186 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:28.083158   24186 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:28.083177   24186 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:28.083191   24186 main.go:141] libmachine: Making call to close driver server
I1107 23:09:28.083201   24186 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:28.083485   24186 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:28.083522   24186 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
I1107 23:09:28.083531   24186 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-277453 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-277453 | 7ee081b5dc595 | 30B    |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| gcr.io/google-containers/addon-resizer      | functional-277453 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | c20060033e06f | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-277453 | bf86586e2bc63 | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-277453 image ls --format table --alsologtostderr:
I1107 23:09:31.822431   24354 out.go:296] Setting OutFile to fd 1 ...
I1107 23:09:31.822561   24354 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:31.822570   24354 out.go:309] Setting ErrFile to fd 2...
I1107 23:09:31.822574   24354 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:31.822782   24354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
I1107 23:09:31.823416   24354 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:31.823515   24354 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:31.823936   24354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:31.823984   24354 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:31.841104   24354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
I1107 23:09:31.841559   24354 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:31.842181   24354 main.go:141] libmachine: Using API Version  1
I1107 23:09:31.842206   24354 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:31.842501   24354 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:31.842685   24354 main.go:141] libmachine: (functional-277453) Calling .GetState
I1107 23:09:31.844536   24354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:31.844583   24354 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:31.858887   24354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
I1107 23:09:31.859287   24354 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:31.859689   24354 main.go:141] libmachine: Using API Version  1
I1107 23:09:31.859713   24354 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:31.860047   24354 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:31.860248   24354 main.go:141] libmachine: (functional-277453) Calling .DriverName
I1107 23:09:31.860452   24354 ssh_runner.go:195] Run: systemctl --version
I1107 23:09:31.860477   24354 main.go:141] libmachine: (functional-277453) Calling .GetSSHHostname
I1107 23:09:31.863045   24354 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:31.863439   24354 main.go:141] libmachine: (functional-277453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:de:93", ip: ""} in network mk-functional-277453: {Iface:virbr1 ExpiryTime:2023-11-08 00:06:31 +0000 UTC Type:0 Mac:52:54:00:5f:de:93 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-277453 Clientid:01:52:54:00:5f:de:93}
I1107 23:09:31.863475   24354 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined IP address 192.168.39.2 and MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:31.863557   24354 main.go:141] libmachine: (functional-277453) Calling .GetSSHPort
I1107 23:09:31.863724   24354 main.go:141] libmachine: (functional-277453) Calling .GetSSHKeyPath
I1107 23:09:31.863884   24354 main.go:141] libmachine: (functional-277453) Calling .GetSSHUsername
I1107 23:09:31.864033   24354 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/functional-277453/id_rsa Username:docker}
I1107 23:09:31.963423   24354 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1107 23:09:31.994313   24354 main.go:141] libmachine: Making call to close driver server
I1107 23:09:31.994336   24354 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:31.994632   24354 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:31.994654   24354 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:31.994665   24354 main.go:141] libmachine: Making call to close driver server
I1107 23:09:31.994665   24354 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
I1107 23:09:31.994674   24354 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:31.994856   24354 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:31.994870   24354 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:31.994881   24354 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-277453 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"7ee081b5dc5955cb72d70b5537219f02ea6b1145be9796dcec78b8c98af58655","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-277453"],"size":"30"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-277453"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"bf86586e2bc637219dde94dbde536d7989eafa24238c737a8efb748a8067987a","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-277453"],"size":"1240000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-277453 image ls --format json --alsologtostderr:
I1107 23:09:31.752562   24337 out.go:296] Setting OutFile to fd 1 ...
I1107 23:09:31.752717   24337 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:31.752727   24337 out.go:309] Setting ErrFile to fd 2...
I1107 23:09:31.752732   24337 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:31.752925   24337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
I1107 23:09:31.753548   24337 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:31.753663   24337 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:31.754048   24337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:31.754108   24337 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:31.768773   24337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34387
I1107 23:09:31.769241   24337 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:31.769955   24337 main.go:141] libmachine: Using API Version  1
I1107 23:09:31.769987   24337 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:31.770382   24337 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:31.770596   24337 main.go:141] libmachine: (functional-277453) Calling .GetState
I1107 23:09:31.772722   24337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:31.772771   24337 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:31.788058   24337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
I1107 23:09:31.788499   24337 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:31.789002   24337 main.go:141] libmachine: Using API Version  1
I1107 23:09:31.789026   24337 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:31.789311   24337 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:31.789508   24337 main.go:141] libmachine: (functional-277453) Calling .DriverName
I1107 23:09:31.789694   24337 ssh_runner.go:195] Run: systemctl --version
I1107 23:09:31.789719   24337 main.go:141] libmachine: (functional-277453) Calling .GetSSHHostname
I1107 23:09:31.792699   24337 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:31.793177   24337 main.go:141] libmachine: (functional-277453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:de:93", ip: ""} in network mk-functional-277453: {Iface:virbr1 ExpiryTime:2023-11-08 00:06:31 +0000 UTC Type:0 Mac:52:54:00:5f:de:93 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-277453 Clientid:01:52:54:00:5f:de:93}
I1107 23:09:31.793288   24337 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined IP address 192.168.39.2 and MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:31.793609   24337 main.go:141] libmachine: (functional-277453) Calling .GetSSHPort
I1107 23:09:31.793797   24337 main.go:141] libmachine: (functional-277453) Calling .GetSSHKeyPath
I1107 23:09:31.793950   24337 main.go:141] libmachine: (functional-277453) Calling .GetSSHUsername
I1107 23:09:31.794103   24337 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/functional-277453/id_rsa Username:docker}
I1107 23:09:31.884433   24337 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1107 23:09:31.914442   24337 main.go:141] libmachine: Making call to close driver server
I1107 23:09:31.914456   24337 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:31.914756   24337 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:31.914776   24337 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:31.914790   24337 main.go:141] libmachine: Making call to close driver server
I1107 23:09:31.914797   24337 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:31.915068   24337 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
I1107 23:09:31.915131   24337 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:31.915161   24337 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-277453 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-277453
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7ee081b5dc5955cb72d70b5537219f02ea6b1145be9796dcec78b8c98af58655
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-277453
size: "30"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-277453 image ls --format yaml --alsologtostderr:
I1107 23:09:28.154959   24222 out.go:296] Setting OutFile to fd 1 ...
I1107 23:09:28.155144   24222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:28.155157   24222 out.go:309] Setting ErrFile to fd 2...
I1107 23:09:28.155165   24222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:28.155415   24222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
I1107 23:09:28.156216   24222 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:28.156378   24222 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:28.156952   24222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:28.157019   24222 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:28.171945   24222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
I1107 23:09:28.172451   24222 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:28.173163   24222 main.go:141] libmachine: Using API Version  1
I1107 23:09:28.173195   24222 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:28.173649   24222 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:28.173904   24222 main.go:141] libmachine: (functional-277453) Calling .GetState
I1107 23:09:28.176162   24222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:28.176214   24222 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:28.191754   24222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
I1107 23:09:28.192208   24222 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:28.192974   24222 main.go:141] libmachine: Using API Version  1
I1107 23:09:28.193010   24222 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:28.193477   24222 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:28.193669   24222 main.go:141] libmachine: (functional-277453) Calling .DriverName
I1107 23:09:28.193904   24222 ssh_runner.go:195] Run: systemctl --version
I1107 23:09:28.193926   24222 main.go:141] libmachine: (functional-277453) Calling .GetSSHHostname
I1107 23:09:28.196869   24222 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:28.197227   24222 main.go:141] libmachine: (functional-277453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:de:93", ip: ""} in network mk-functional-277453: {Iface:virbr1 ExpiryTime:2023-11-08 00:06:31 +0000 UTC Type:0 Mac:52:54:00:5f:de:93 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-277453 Clientid:01:52:54:00:5f:de:93}
I1107 23:09:28.197252   24222 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined IP address 192.168.39.2 and MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:28.197438   24222 main.go:141] libmachine: (functional-277453) Calling .GetSSHPort
I1107 23:09:28.197645   24222 main.go:141] libmachine: (functional-277453) Calling .GetSSHKeyPath
I1107 23:09:28.197799   24222 main.go:141] libmachine: (functional-277453) Calling .GetSSHUsername
I1107 23:09:28.197948   24222 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/functional-277453/id_rsa Username:docker}
I1107 23:09:28.287661   24222 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1107 23:09:28.329870   24222 main.go:141] libmachine: Making call to close driver server
I1107 23:09:28.329882   24222 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:28.330213   24222 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:28.330234   24222 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:28.330245   24222 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
I1107 23:09:28.330253   24222 main.go:141] libmachine: Making call to close driver server
I1107 23:09:28.330264   24222 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:28.330559   24222 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:28.330583   24222 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
I1107 23:09:28.330605   24222 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 ssh pgrep buildkitd: exit status 1 (231.687112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image build -t localhost/my-image:functional-277453 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image build -t localhost/my-image:functional-277453 testdata/build --alsologtostderr: (2.88213489s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-277453 image build -t localhost/my-image:functional-277453 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in fa6094cdf793
Removing intermediate container fa6094cdf793
---> f1be3d16edf2
Step 3/3 : ADD content.txt /
---> bf86586e2bc6
Successfully built bf86586e2bc6
Successfully tagged localhost/my-image:functional-277453
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-277453 image build -t localhost/my-image:functional-277453 testdata/build --alsologtostderr:
I1107 23:09:28.625172   24286 out.go:296] Setting OutFile to fd 1 ...
I1107 23:09:28.626248   24286 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:28.626260   24286 out.go:309] Setting ErrFile to fd 2...
I1107 23:09:28.626265   24286 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:09:28.626480   24286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
I1107 23:09:28.627070   24286 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:28.627611   24286 config.go:182] Loaded profile config "functional-277453": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 23:09:28.627994   24286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:28.628039   24286 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:28.643220   24286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
I1107 23:09:28.643831   24286 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:28.644467   24286 main.go:141] libmachine: Using API Version  1
I1107 23:09:28.644507   24286 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:28.644857   24286 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:28.645039   24286 main.go:141] libmachine: (functional-277453) Calling .GetState
I1107 23:09:28.646927   24286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1107 23:09:28.646963   24286 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:09:28.666143   24286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
I1107 23:09:28.666569   24286 main.go:141] libmachine: () Calling .GetVersion
I1107 23:09:28.667133   24286 main.go:141] libmachine: Using API Version  1
I1107 23:09:28.667158   24286 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:09:28.667473   24286 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:09:28.667676   24286 main.go:141] libmachine: (functional-277453) Calling .DriverName
I1107 23:09:28.667902   24286 ssh_runner.go:195] Run: systemctl --version
I1107 23:09:28.667933   24286 main.go:141] libmachine: (functional-277453) Calling .GetSSHHostname
I1107 23:09:28.671032   24286 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:28.671484   24286 main.go:141] libmachine: (functional-277453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:de:93", ip: ""} in network mk-functional-277453: {Iface:virbr1 ExpiryTime:2023-11-08 00:06:31 +0000 UTC Type:0 Mac:52:54:00:5f:de:93 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-277453 Clientid:01:52:54:00:5f:de:93}
I1107 23:09:28.671515   24286 main.go:141] libmachine: (functional-277453) DBG | domain functional-277453 has defined IP address 192.168.39.2 and MAC address 52:54:00:5f:de:93 in network mk-functional-277453
I1107 23:09:28.671686   24286 main.go:141] libmachine: (functional-277453) Calling .GetSSHPort
I1107 23:09:28.671857   24286 main.go:141] libmachine: (functional-277453) Calling .GetSSHKeyPath
I1107 23:09:28.672037   24286 main.go:141] libmachine: (functional-277453) Calling .GetSSHUsername
I1107 23:09:28.672180   24286 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/functional-277453/id_rsa Username:docker}
I1107 23:09:28.768759   24286 build_images.go:151] Building image from path: /tmp/build.1106893883.tar
I1107 23:09:28.768851   24286 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1107 23:09:28.779435   24286 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1106893883.tar
I1107 23:09:28.784656   24286 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1106893883.tar: stat -c "%s %y" /var/lib/minikube/build/build.1106893883.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1106893883.tar': No such file or directory
I1107 23:09:28.784696   24286 ssh_runner.go:362] scp /tmp/build.1106893883.tar --> /var/lib/minikube/build/build.1106893883.tar (3072 bytes)
I1107 23:09:28.814329   24286 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1106893883
I1107 23:09:28.825397   24286 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1106893883 -xf /var/lib/minikube/build/build.1106893883.tar
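The three runs above form a small staging pipeline: ship the build context to the guest as a tar, create the build directory, and unpack it before invoking docker build. The same sequence can be sketched locally (paths and file names below are placeholders, not the test's real /var/lib/minikube layout, and the sudo/ssh indirection is dropped):

```shell
set -eu
stage=$(mktemp -d)

# Build a tiny context tar, standing in for /tmp/build.1106893883.tar.
mkdir -p "$stage/context"
printf 'FROM scratch\n' > "$stage/context/Dockerfile"
tar -C "$stage/context" -cf "$stage/build.tar" Dockerfile

# Mirror of: sudo mkdir -p /var/lib/minikube/build/build.NNN
mkdir -p "$stage/build"

# Mirror of: sudo tar -C /var/lib/minikube/build/build.NNN -xf build.NNN.tar
tar -C "$stage/build" -xf "$stage/build.tar"

ls "$stage/build"    # the unpacked context that docker build would consume
```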
I1107 23:09:28.842213   24286 docker.go:346] Building image: /var/lib/minikube/build/build.1106893883
I1107 23:09:28.842298   24286 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-277453 /var/lib/minikube/build/build.1106893883
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I1107 23:09:31.424233   24286 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-277453 /var/lib/minikube/build/build.1106893883: (2.581913944s)
I1107 23:09:31.424302   24286 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1106893883
I1107 23:09:31.435055   24286 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1106893883.tar
I1107 23:09:31.445398   24286 build_images.go:207] Built localhost/my-image:functional-277453 from /tmp/build.1106893883.tar
I1107 23:09:31.445431   24286 build_images.go:123] succeeded building to: functional-277453
I1107 23:09:31.445435   24286 build_images.go:124] failed building to: 
I1107 23:09:31.445463   24286 main.go:141] libmachine: Making call to close driver server
I1107 23:09:31.445484   24286 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:31.445748   24286 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:31.445771   24286 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:31.445772   24286 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
I1107 23:09:31.445789   24286 main.go:141] libmachine: Making call to close driver server
I1107 23:09:31.445801   24286 main.go:141] libmachine: (functional-277453) Calling .Close
I1107 23:09:31.446028   24286 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:09:31.446049   24286 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:09:31.446050   24286 main.go:141] libmachine: (functional-277453) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)

TestFunctional/parallel/ImageCommands/Setup (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.443191117s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-277453
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.46s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-277453 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-277453 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-9gmmx" [b067c0bc-efa1-4c7b-bffe-88c2e6a0b389] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-9gmmx" [b067c0bc-efa1-4c7b-bffe-88c2e6a0b389] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.018034049s
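The "waiting 10m0s for pods matching ..." lines above are a poll loop: the harness re-checks the pod's phase until it reports Running or the deadline expires. A generic sketch of that poll-until pattern (the kubectl predicate shown in the comment is hypothetical; a local file stands in for pod readiness so the sketch runs anywhere):

```shell
set -eu

# poll_until TIMEOUT_SECONDS CMD... : rerun CMD once per second until it
# succeeds or the timeout elapses; returns 1 on timeout.
poll_until() {
  timeout=$1; shift
  elapsed=0
  until "$@"; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then return 1; fi
    sleep 1
  done
}

# Real usage would resemble (hypothetical predicate):
#   poll_until 600 sh -c 'kubectl get pods -l app=hello-node | grep -q Running'
marker="$(mktemp -d)/ready"
( sleep 2; : > "$marker" ) &          # "pod" becomes ready after ~2s
poll_until 10 test -f "$marker" && echo "healthy"
```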
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image load --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr
E1107 23:09:00.927681   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image load --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr: (4.00466954s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image load --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image load --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr: (2.397424061s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.61s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.241878401s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-277453
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image load --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image load --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr: (4.28704845s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.82s)

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 service list -o json
functional_test.go:1493: Took "274.761541ms" to run "out/minikube-linux-amd64 -p functional-277453 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.2:32223
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.2:32223
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image save gcr.io/google-containers/addon-resizer:functional-277453 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image save gcr.io/google-containers/addon-resizer:functional-277453 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.24041566s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.24s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "334.381416ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "59.609472ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/MountCmd/any-port (11.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdany-port589101540/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699398551891207447" to /tmp/TestFunctionalparallelMountCmdany-port589101540/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699398551891207447" to /tmp/TestFunctionalparallelMountCmdany-port589101540/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699398551891207447" to /tmp/TestFunctionalparallelMountCmdany-port589101540/001/test-1699398551891207447
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.768463ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 23:09 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 23:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 23:09 test-1699398551891207447
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh cat /mount-9p/test-1699398551891207447
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-277453 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [88b8c5c6-d990-4a4b-aaac-9cfef14d4ebb] Pending
helpers_test.go:344: "busybox-mount" [88b8c5c6-d990-4a4b-aaac-9cfef14d4ebb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [88b8c5c6-d990-4a4b-aaac-9cfef14d4ebb] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [88b8c5c6-d990-4a4b-aaac-9cfef14d4ebb] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.014719516s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-277453 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdany-port589101540/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.82s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "243.454211ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "72.681191ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image rm gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (3.009116959s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-277453
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 image save --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-277453 image save --daemon gcr.io/google-containers/addon-resizer:functional-277453 --alsologtostderr: (1.690094973s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-277453
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.72s)

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdspecific-port3238142208/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.257353ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdspecific-port3238142208/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-277453 ssh "sudo umount -f /mount-9p": exit status 1 (209.116455ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-277453 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdspecific-port3238142208/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878887135/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878887135/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878887135/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-277453 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-277453 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878887135/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878887135/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-277453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878887135/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-277453
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-277453
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-277453
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (321.15s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-826760 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-826760 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m50.712175849s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-826760 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-826760 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.063568334s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-826760 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-826760 addons enable gvisor: (6.900816186s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [c7a9812b-5930-4f43-8b7f-c388a6a4cc08] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.024253463s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-826760 replace --force -f testdata/nginx-gvisor.yaml
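testdata/nginx-gvisor.yaml itself is not shown in the log, but a pod that carries the run=nginx,runtime=gvisor labels the wait below selects on, and that schedules onto the gVisor runtime, would typically look like this (illustrative sketch, not the file's verbatim contents):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
  labels:
    run: nginx
    runtime: gvisor
spec:
  runtimeClassName: gvisor   # resolved to the runsc handler the gvisor addon configures
  containers:
  - name: nginx
    image: nginx
```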
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [3f9d3120-cb83-48d7-9e6d-7d6519e07b94] Pending
helpers_test.go:344: "nginx-gvisor" [3f9d3120-cb83-48d7-9e6d-7d6519e07b94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [3f9d3120-cb83-48d7-9e6d-7d6519e07b94] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 13.026443542s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-826760
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-826760: (1m32.21060844s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-826760 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1107 23:42:29.813829   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-826760 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (59.814125738s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [c7a9812b-5930-4f43-8b7f-c388a6a4cc08] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.02489941s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [3f9d3120-cb83-48d7-9e6d-7d6519e07b94] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.01617429s
helpers_test.go:175: Cleaning up "gvisor-826760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-826760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-826760: (1.051908317s)
--- PASS: TestGvisorAddon (321.15s)

TestImageBuild/serial/Setup (54.38s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-392196 --driver=kvm2 
E1107 23:10:02.369444   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-392196 --driver=kvm2 : (54.383888022s)
--- PASS: TestImageBuild/serial/Setup (54.38s)

TestImageBuild/serial/NormalBuild (1.91s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-392196
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-392196: (1.910655064s)
--- PASS: TestImageBuild/serial/NormalBuild (1.91s)

TestImageBuild/serial/BuildWithBuildArg (1.37s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-392196
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-392196: (1.373997222s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.37s)

TestImageBuild/serial/BuildWithDockerIgnore (0.42s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-392196
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.42s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-392196
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestIngressAddonLegacy/StartLegacyK8sCluster (74.26s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-367854 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1107 23:11:24.292244   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-367854 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m14.263008139s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (74.26s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons enable ingress --alsologtostderr -v=5: (13.377202267s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.38s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.39s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-367854 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-367854 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.740550926s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-367854 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-367854 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8de33f3c-7a26-4e27-b58d-3378d6c260e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8de33f3c-7a26-4e27-b58d-3378d6c260e1] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.021673365s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-367854 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-367854 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-367854 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.57
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons disable ingress-dns --alsologtostderr -v=1: (1.866189164s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-367854 addons disable ingress --alsologtostderr -v=1: (7.59961251s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.39s)

TestJSONOutput/start/Command (64.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-900182 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1107 23:13:40.443309   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:13:57.772167   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:57.777431   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:57.787754   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:57.808052   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:57.848386   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:57.928758   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:58.089193   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:58.409809   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:13:59.050788   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:14:00.331113   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:14:02.892092   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:14:08.013028   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:14:08.133288   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-900182 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m4.142356171s)
--- PASS: TestJSONOutput/start/Command (64.14s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-900182 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-900182 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-900182 --output=json --user=testUser
E1107 23:14:18.253904   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-900182 --output=json --user=testUser: (13.112534685s)
--- PASS: TestJSONOutput/stop/Command (13.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-548652 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-548652 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.686995ms)
-- stdout --
	{"specversion":"1.0","id":"59893a0c-56dd-420f-8a7c-bef8081a7680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-548652] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4f9cd66-9aa6-4138-abe2-ee61c3d96616","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"2c4195fa-da28-48be-a813-ccef38000d55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67a2be4b-aad9-40a6-b343-593f9dff0ba9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig"}}
	{"specversion":"1.0","id":"1a41f828-2628-43c0-98f4-c87be9068d8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube"}}
	{"specversion":"1.0","id":"09776e3b-ff9d-48aa-802e-94413d8cb172","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"07158f25-77cc-4e29-90f5-b4d9c7ea9843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"30cbf29f-a167-4bbb-9058-2d93c126da02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-548652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-548652
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (103.61s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-297376 --driver=kvm2 
E1107 23:14:38.734261   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:15:19.694935   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-297376 --driver=kvm2 : (52.125270165s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-299276 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-299276 --driver=kvm2 : (48.608291173s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-297376
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-299276
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-299276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-299276
helpers_test.go:175: Cleaning up "first-297376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-297376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-297376: (1.000841162s)
--- PASS: TestMinikubeProfile (103.61s)

TestMountStart/serial/StartWithMountFirst (29.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-799185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-799185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.14938674s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.15s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-799185 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-799185 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (28.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-816062 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1107 23:16:41.615595   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-816062 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.30715474s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.31s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816062 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816062 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-799185 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816062 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816062 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (11.42s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-816062
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-816062: (11.422504778s)
--- PASS: TestMountStart/serial/Stop (11.42s)

TestMountStart/serial/RestartStopped (23.06s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-816062
E1107 23:17:29.814382   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:29.819711   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:29.830017   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:29.850335   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:29.890664   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:29.971039   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:30.131447   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:30.452102   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:31.093159   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:32.373734   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:34.934585   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:17:40.055763   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-816062: (22.064727574s)
--- PASS: TestMountStart/serial/RestartStopped (23.06s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816062 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816062 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (126.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206924 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1107 23:17:50.296253   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:18:10.776534   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:18:40.443433   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:18:51.737177   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:18:57.772752   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:19:25.455873   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206924 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m5.816785133s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.25s)

TestMultiNode/serial/DeployApp2Nodes (5.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-206924 -- rollout status deployment/busybox: (3.19660597s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-ll9bc -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-rr5dn -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-ll9bc -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-rr5dn -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-ll9bc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-rr5dn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.07s)

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-ll9bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-ll9bc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-rr5dn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206924 -- exec busybox-5bc68d56bd-rr5dn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (50.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-206924 -v 3 --alsologtostderr
E1107 23:20:13.658967   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-206924 -v 3 --alsologtostderr: (49.489771219s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.09s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp testdata/cp-test.txt multinode-206924:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1362071559/001/cp-test_multinode-206924.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924:/home/docker/cp-test.txt multinode-206924-m02:/home/docker/cp-test_multinode-206924_multinode-206924-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m02 "sudo cat /home/docker/cp-test_multinode-206924_multinode-206924-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924:/home/docker/cp-test.txt multinode-206924-m03:/home/docker/cp-test_multinode-206924_multinode-206924-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m03 "sudo cat /home/docker/cp-test_multinode-206924_multinode-206924-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp testdata/cp-test.txt multinode-206924-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1362071559/001/cp-test_multinode-206924-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924-m02:/home/docker/cp-test.txt multinode-206924:/home/docker/cp-test_multinode-206924-m02_multinode-206924.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924 "sudo cat /home/docker/cp-test_multinode-206924-m02_multinode-206924.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924-m02:/home/docker/cp-test.txt multinode-206924-m03:/home/docker/cp-test_multinode-206924-m02_multinode-206924-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m03 "sudo cat /home/docker/cp-test_multinode-206924-m02_multinode-206924-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp testdata/cp-test.txt multinode-206924-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1362071559/001/cp-test_multinode-206924-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924-m03:/home/docker/cp-test.txt multinode-206924:/home/docker/cp-test_multinode-206924-m03_multinode-206924.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924 "sudo cat /home/docker/cp-test_multinode-206924-m03_multinode-206924.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 cp multinode-206924-m03:/home/docker/cp-test.txt multinode-206924-m02:/home/docker/cp-test_multinode-206924-m03_multinode-206924-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 ssh -n multinode-206924-m02 "sudo cat /home/docker/cp-test_multinode-206924-m03_multinode-206924-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.66s)

TestMultiNode/serial/StopNode (4.02s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-206924 node stop m03: (3.094233717s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206924 status: exit status 7 (453.546019ms)
-- stdout --
	multinode-206924
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-206924-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-206924-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr: exit status 7 (467.133313ms)
-- stdout --
	multinode-206924
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-206924-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-206924-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1107 23:21:00.900613   31833 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:21:00.900792   31833 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:21:00.900805   31833 out.go:309] Setting ErrFile to fd 2...
	I1107 23:21:00.900813   31833 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:21:00.901040   31833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	I1107 23:21:00.901212   31833 out.go:303] Setting JSON to false
	I1107 23:21:00.901242   31833 mustload.go:65] Loading cluster: multinode-206924
	I1107 23:21:00.901302   31833 notify.go:220] Checking for updates...
	I1107 23:21:00.901862   31833 config.go:182] Loaded profile config "multinode-206924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:21:00.901885   31833 status.go:255] checking status of multinode-206924 ...
	I1107 23:21:00.902340   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:00.902389   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:00.923882   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I1107 23:21:00.924314   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:00.924891   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:00.924918   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:00.925268   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:00.925515   31833 main.go:141] libmachine: (multinode-206924) Calling .GetState
	I1107 23:21:00.927214   31833 status.go:330] multinode-206924 host status = "Running" (err=<nil>)
	I1107 23:21:00.927250   31833 host.go:66] Checking if "multinode-206924" exists ...
	I1107 23:21:00.927559   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:00.927603   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:00.942807   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I1107 23:21:00.943241   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:00.943731   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:00.943760   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:00.944067   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:00.944238   31833 main.go:141] libmachine: (multinode-206924) Calling .GetIP
	I1107 23:21:00.946867   31833 main.go:141] libmachine: (multinode-206924) DBG | domain multinode-206924 has defined MAC address 52:54:00:c4:81:eb in network mk-multinode-206924
	I1107 23:21:00.947299   31833 main.go:141] libmachine: (multinode-206924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:81:eb", ip: ""} in network mk-multinode-206924: {Iface:virbr1 ExpiryTime:2023-11-08 00:18:02 +0000 UTC Type:0 Mac:52:54:00:c4:81:eb Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-206924 Clientid:01:52:54:00:c4:81:eb}
	I1107 23:21:00.947323   31833 main.go:141] libmachine: (multinode-206924) DBG | domain multinode-206924 has defined IP address 192.168.39.14 and MAC address 52:54:00:c4:81:eb in network mk-multinode-206924
	I1107 23:21:00.947471   31833 host.go:66] Checking if "multinode-206924" exists ...
	I1107 23:21:00.947778   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:00.947813   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:00.963475   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I1107 23:21:00.963863   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:00.964287   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:00.964310   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:00.964593   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:00.964784   31833 main.go:141] libmachine: (multinode-206924) Calling .DriverName
	I1107 23:21:00.964976   31833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:21:00.965004   31833 main.go:141] libmachine: (multinode-206924) Calling .GetSSHHostname
	I1107 23:21:00.967413   31833 main.go:141] libmachine: (multinode-206924) DBG | domain multinode-206924 has defined MAC address 52:54:00:c4:81:eb in network mk-multinode-206924
	I1107 23:21:00.967829   31833 main.go:141] libmachine: (multinode-206924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:81:eb", ip: ""} in network mk-multinode-206924: {Iface:virbr1 ExpiryTime:2023-11-08 00:18:02 +0000 UTC Type:0 Mac:52:54:00:c4:81:eb Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-206924 Clientid:01:52:54:00:c4:81:eb}
	I1107 23:21:00.967866   31833 main.go:141] libmachine: (multinode-206924) DBG | domain multinode-206924 has defined IP address 192.168.39.14 and MAC address 52:54:00:c4:81:eb in network mk-multinode-206924
	I1107 23:21:00.967990   31833 main.go:141] libmachine: (multinode-206924) Calling .GetSSHPort
	I1107 23:21:00.968167   31833 main.go:141] libmachine: (multinode-206924) Calling .GetSSHKeyPath
	I1107 23:21:00.968320   31833 main.go:141] libmachine: (multinode-206924) Calling .GetSSHUsername
	I1107 23:21:00.968486   31833 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/multinode-206924/id_rsa Username:docker}
	I1107 23:21:01.053159   31833 ssh_runner.go:195] Run: systemctl --version
	I1107 23:21:01.063951   31833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:21:01.080175   31833 kubeconfig.go:92] found "multinode-206924" server: "https://192.168.39.14:8443"
	I1107 23:21:01.080207   31833 api_server.go:166] Checking apiserver status ...
	I1107 23:21:01.080251   31833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:21:01.098332   31833 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1849/cgroup
	I1107 23:21:01.107375   31833 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podca5a08c1edba1aa1b5c83940031373a0/0aa3cfe8de9595c66af9566653ddeaf254468aba4039e3a091e4a6e7fc332589"
	I1107 23:21:01.107447   31833 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podca5a08c1edba1aa1b5c83940031373a0/0aa3cfe8de9595c66af9566653ddeaf254468aba4039e3a091e4a6e7fc332589/freezer.state
	I1107 23:21:01.118189   31833 api_server.go:204] freezer state: "THAWED"
	I1107 23:21:01.118218   31833 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1107 23:21:01.123218   31833 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1107 23:21:01.123240   31833 status.go:421] multinode-206924 apiserver status = Running (err=<nil>)
	I1107 23:21:01.123248   31833 status.go:257] multinode-206924 status: &{Name:multinode-206924 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:21:01.123262   31833 status.go:255] checking status of multinode-206924-m02 ...
	I1107 23:21:01.123569   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:01.123605   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:01.138847   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I1107 23:21:01.139221   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:01.139623   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:01.139644   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:01.139916   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:01.140133   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .GetState
	I1107 23:21:01.141757   31833 status.go:330] multinode-206924-m02 host status = "Running" (err=<nil>)
	I1107 23:21:01.141772   31833 host.go:66] Checking if "multinode-206924-m02" exists ...
	I1107 23:21:01.142072   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:01.142114   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:01.157187   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46421
	I1107 23:21:01.157593   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:01.158033   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:01.158059   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:01.158364   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:01.158541   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .GetIP
	I1107 23:21:01.161375   31833 main.go:141] libmachine: (multinode-206924-m02) DBG | domain multinode-206924-m02 has defined MAC address 52:54:00:a7:3b:fb in network mk-multinode-206924
	I1107 23:21:01.161803   31833 main.go:141] libmachine: (multinode-206924-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:3b:fb", ip: ""} in network mk-multinode-206924: {Iface:virbr1 ExpiryTime:2023-11-08 00:19:18 +0000 UTC Type:0 Mac:52:54:00:a7:3b:fb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-206924-m02 Clientid:01:52:54:00:a7:3b:fb}
	I1107 23:21:01.161836   31833 main.go:141] libmachine: (multinode-206924-m02) DBG | domain multinode-206924-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:3b:fb in network mk-multinode-206924
	I1107 23:21:01.162002   31833 host.go:66] Checking if "multinode-206924-m02" exists ...
	I1107 23:21:01.162357   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:01.162398   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:01.176989   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I1107 23:21:01.177392   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:01.177772   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:01.177789   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:01.178124   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:01.178280   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .DriverName
	I1107 23:21:01.178476   31833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:21:01.178499   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .GetSSHHostname
	I1107 23:21:01.181019   31833 main.go:141] libmachine: (multinode-206924-m02) DBG | domain multinode-206924-m02 has defined MAC address 52:54:00:a7:3b:fb in network mk-multinode-206924
	I1107 23:21:01.181424   31833 main.go:141] libmachine: (multinode-206924-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:3b:fb", ip: ""} in network mk-multinode-206924: {Iface:virbr1 ExpiryTime:2023-11-08 00:19:18 +0000 UTC Type:0 Mac:52:54:00:a7:3b:fb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-206924-m02 Clientid:01:52:54:00:a7:3b:fb}
	I1107 23:21:01.181457   31833 main.go:141] libmachine: (multinode-206924-m02) DBG | domain multinode-206924-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:3b:fb in network mk-multinode-206924
	I1107 23:21:01.181620   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .GetSSHPort
	I1107 23:21:01.181786   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .GetSSHKeyPath
	I1107 23:21:01.181958   31833 main.go:141] libmachine: (multinode-206924-m02) Calling .GetSSHUsername
	I1107 23:21:01.182106   31833 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9672/.minikube/machines/multinode-206924-m02/id_rsa Username:docker}
	I1107 23:21:01.276875   31833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:21:01.289801   31833 status.go:257] multinode-206924-m02 status: &{Name:multinode-206924-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:21:01.289852   31833 status.go:255] checking status of multinode-206924-m03 ...
	I1107 23:21:01.290240   31833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:21:01.290290   31833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:21:01.305990   31833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I1107 23:21:01.306462   31833 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:21:01.306945   31833 main.go:141] libmachine: Using API Version  1
	I1107 23:21:01.306970   31833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:21:01.307392   31833 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:21:01.307557   31833 main.go:141] libmachine: (multinode-206924-m03) Calling .GetState
	I1107 23:21:01.309092   31833 status.go:330] multinode-206924-m03 host status = "Stopped" (err=<nil>)
	I1107 23:21:01.309107   31833 status.go:343] host is not running, skipping remaining checks
	I1107 23:21:01.309112   31833 status.go:257] multinode-206924-m03 status: &{Name:multinode-206924-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.02s)

TestMultiNode/serial/StartAfterStop (31.08s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-206924 node start m03 --alsologtostderr: (30.437849499s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.08s)

TestMultiNode/serial/RestartKeepsNodes (184.41s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-206924
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-206924
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-206924: (27.801806797s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206924 --wait=true -v=8 --alsologtostderr
E1107 23:22:29.814067   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:22:57.499433   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:23:40.443723   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:23:57.771853   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206924 --wait=true -v=8 --alsologtostderr: (2m36.488080127s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-206924
--- PASS: TestMultiNode/serial/RestartKeepsNodes (184.41s)

TestMultiNode/serial/DeleteNode (1.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-206924 node delete m03: (1.214022712s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.77s)

TestMultiNode/serial/StopMultiNode (25.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 stop
E1107 23:25:03.493490   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-206924 stop: (25.351957127s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206924 status: exit status 7 (94.308956ms)
-- stdout --
	multinode-206924
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-206924-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr: exit status 7 (103.183679ms)
-- stdout --
	multinode-206924
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-206924-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1107 23:25:04.075922   33261 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:25:04.076167   33261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:25:04.076175   33261 out.go:309] Setting ErrFile to fd 2...
	I1107 23:25:04.076180   33261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:25:04.076360   33261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9672/.minikube/bin
	I1107 23:25:04.076522   33261 out.go:303] Setting JSON to false
	I1107 23:25:04.076550   33261 mustload.go:65] Loading cluster: multinode-206924
	I1107 23:25:04.076662   33261 notify.go:220] Checking for updates...
	I1107 23:25:04.076948   33261 config.go:182] Loaded profile config "multinode-206924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 23:25:04.076961   33261 status.go:255] checking status of multinode-206924 ...
	I1107 23:25:04.077365   33261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:25:04.077444   33261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:25:04.095897   33261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I1107 23:25:04.096408   33261 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:25:04.097034   33261 main.go:141] libmachine: Using API Version  1
	I1107 23:25:04.097068   33261 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:25:04.097453   33261 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:25:04.097656   33261 main.go:141] libmachine: (multinode-206924) Calling .GetState
	I1107 23:25:04.099278   33261 status.go:330] multinode-206924 host status = "Stopped" (err=<nil>)
	I1107 23:25:04.099296   33261 status.go:343] host is not running, skipping remaining checks
	I1107 23:25:04.099303   33261 status.go:257] multinode-206924 status: &{Name:multinode-206924 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:25:04.099355   33261 status.go:255] checking status of multinode-206924-m02 ...
	I1107 23:25:04.099681   33261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1107 23:25:04.099726   33261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:25:04.114224   33261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1107 23:25:04.114673   33261 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:25:04.115151   33261 main.go:141] libmachine: Using API Version  1
	I1107 23:25:04.115172   33261 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:25:04.115472   33261 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:25:04.115642   33261 main.go:141] libmachine: (multinode-206924-m02) Calling .GetState
	I1107 23:25:04.117134   33261 status.go:330] multinode-206924-m02 host status = "Stopped" (err=<nil>)
	I1107 23:25:04.117151   33261 status.go:343] host is not running, skipping remaining checks
	I1107 23:25:04.117156   33261 status.go:257] multinode-206924-m02 status: &{Name:multinode-206924-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.55s)

TestMultiNode/serial/RestartMultiNode (103.59s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206924 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206924 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m43.033844363s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206924 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (103.59s)

TestMultiNode/serial/ValidateNameConflict (53.47s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-206924
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206924-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-206924-m02 --driver=kvm2 : exit status 14 (79.306091ms)
-- stdout --
	* [multinode-206924-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-206924-m02' is duplicated with machine name 'multinode-206924-m02' in profile 'multinode-206924'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206924-m03 --driver=kvm2 
E1107 23:27:29.813514   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206924-m03 --driver=kvm2 : (52.07452874s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-206924
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-206924: exit status 80 (240.822831ms)
-- stdout --
	* Adding node m03 to cluster multinode-206924
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-206924-m03 already exists in multinode-206924-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-206924-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-206924-m03: (1.013397935s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.47s)

TestPreload (168.05s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-808235 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1107 23:28:40.443769   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:28:57.772537   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-808235 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m26.341668106s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-808235 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-808235 image pull gcr.io/k8s-minikube/busybox: (1.338451093s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-808235
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-808235: (13.123958208s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-808235 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1107 23:30:20.816895   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-808235 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m6.190812052s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-808235 image list
helpers_test.go:175: Cleaning up "test-preload-808235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-808235
--- PASS: TestPreload (168.05s)

TestScheduledStopUnix (120.52s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-112832 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-112832 --memory=2048 --driver=kvm2 : (48.670871433s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112832 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-112832 -n scheduled-stop-112832
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112832 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112832 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-112832 -n scheduled-stop-112832
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-112832
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112832 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1107 23:32:29.814425   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-112832
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-112832: exit status 7 (85.249429ms)
-- stdout --
	scheduled-stop-112832
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-112832 -n scheduled-stop-112832
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-112832 -n scheduled-stop-112832: exit status 7 (77.397617ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-112832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-112832
--- PASS: TestScheduledStopUnix (120.52s)

TestSkaffold (140.17s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1782596632 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-230644 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-230644 --memory=2600 --driver=kvm2 : (51.321217414s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1782596632 run --minikube-profile skaffold-230644 --kube-context skaffold-230644 --status-check=true --port-forward=false --interactive=false
E1107 23:33:40.444117   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:33:52.860641   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:33:57.772362   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1782596632 run --minikube-profile skaffold-230644 --kube-context skaffold-230644 --status-check=true --port-forward=false --interactive=false: (1m16.990973964s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-66c55ccdd6-9lq6f" [888452a5-9769-463f-93f9-c913de67d533] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01811365s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-757c89cb54-gnn7h" [5b28b0c7-5154-4845-830b-5f7a862c572e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.017097888s
helpers_test.go:175: Cleaning up "skaffold-230644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-230644
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-230644: (1.182942061s)
--- PASS: TestSkaffold (140.17s)

TestRunningBinaryUpgrade (182.25s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2860314948.exe start -p running-upgrade-405281 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2860314948.exe start -p running-upgrade-405281 --memory=2200 --vm-driver=kvm2 : (1m44.586915368s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-405281 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-405281 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m15.981029732s)
helpers_test.go:175: Cleaning up "running-upgrade-405281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-405281
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-405281: (1.17681558s)
--- PASS: TestRunningBinaryUpgrade (182.25s)

TestKubernetesUpgrade (202.07s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m14.379633951s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-471932
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-471932: (4.508110583s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-471932 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-471932 status --format={{.Host}}: exit status 7 (113.498661ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (44.934403497s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-471932 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (104.497887ms)
-- stdout --
	* [kubernetes-upgrade-471932] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-471932
	    minikube start -p kubernetes-upgrade-471932 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4719322 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-471932 --kubernetes-version=v1.28.3
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1107 23:37:29.813875   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-471932 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (1m16.620999716s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-471932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-471932
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-471932: (1.351234503s)
--- PASS: TestKubernetesUpgrade (202.07s)

TestStoppedBinaryUpgrade/Setup (0.34s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

TestStoppedBinaryUpgrade/Upgrade (199.62s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.787537536.exe start -p stopped-upgrade-584667 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.787537536.exe start -p stopped-upgrade-584667 --memory=2200 --vm-driver=kvm2 : (1m47.463104537s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.787537536.exe -p stopped-upgrade-584667 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.787537536.exe -p stopped-upgrade-584667 stop: (13.095914289s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-584667 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-584667 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.058062842s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (199.62s)

TestPause/serial/Start (68.95s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812629 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-812629 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m8.948381717s)
--- PASS: TestPause/serial/Start (68.95s)

TestPause/serial/SecondStartNoReconfiguration (45.18s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812629 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-812629 --alsologtostderr -v=1 --driver=kvm2 : (45.159660082s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-433713 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-433713 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (85.103143ms)
-- stdout --
	* [NoKubernetes-433713] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (57.49s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-433713 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-433713 --driver=kvm2 : (57.222297231s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-433713 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (57.49s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.57s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-584667
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-584667: (1.574034571s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.57s)

TestPause/serial/Pause (0.58s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812629 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.58s)

TestPause/serial/VerifyStatus (0.26s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-812629 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-812629 --output=json --layout=cluster: exit status 2 (256.1199ms)
-- stdout --
	{"Name":"pause-812629","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-812629","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.57s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-812629 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.72s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812629 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (1.03s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-812629 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-812629 --alsologtostderr -v=5: (1.028401669s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

TestPause/serial/VerifyDeletedResources (0.23s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.23s)

TestNoKubernetes/serial/StartWithStopK8s (70.17s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-433713 --no-kubernetes --driver=kvm2 
E1107 23:38:57.772751   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-433713 --no-kubernetes --driver=kvm2 : (1m8.682629895s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-433713 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-433713 status -o json: exit status 2 (317.494122ms)

-- stdout --
	{"Name":"NoKubernetes-433713","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-433713
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-433713: (1.167623192s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (70.17s)

TestNoKubernetes/serial/Start (45.21s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-433713 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-433713 --no-kubernetes --driver=kvm2 : (45.205277551s)
--- PASS: TestNoKubernetes/serial/Start (45.21s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-433713 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-433713 "sudo systemctl is-active --quiet service kubelet": exit status 1 (232.605268ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (1.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

TestNoKubernetes/serial/Stop (2.12s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-433713
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-433713: (2.116786338s)
--- PASS: TestNoKubernetes/serial/Stop (2.12s)

TestNoKubernetes/serial/StartNoArgs (45.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-433713 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-433713 --driver=kvm2 : (45.329888829s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-433713 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-433713 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.847696ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/auto/Start (92.6s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1107 23:42:24.277474   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m32.598119288s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.60s)

TestNetworkPlugins/group/kindnet/Start (96.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m36.658724886s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (12.52s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kcmcm" [47ed33ec-84ac-499c-b610-46baaba4d856] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kcmcm" [47ed33ec-84ac-499c-b610-46baaba4d856] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.016635203s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.52s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (100.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1107 23:43:40.444195   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m40.380874462s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.38s)

TestNetworkPlugins/group/custom-flannel/Start (90.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m30.457465419s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.46s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-clfsm" [a345269e-ea9f-417a-9846-9fb9d17e1921] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024747579s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rfrg4" [11c6e478-9196-4d32-a983-057812357687] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rfrg4" [11c6e478-9196-4d32-a983-057812357687] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.052966756s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

TestNetworkPlugins/group/kindnet/DNS (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.43s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/false/Start (81.57s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m21.565743314s)
--- PASS: TestNetworkPlugins/group/false/Start (81.57s)

TestNetworkPlugins/group/enable-default-cni/Start (111.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1107 23:45:08.117763   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m51.771240696s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (111.77s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4rnst" [5307cc9e-c9ad-4ffc-a87d-5673facf9161] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.02521908s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tfx8c" [3fbcb0e7-f2be-4ec0-b6f0-696628b6dea5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tfx8c" [3fbcb0e7-f2be-4ec0-b6f0-696628b6dea5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.012688435s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.38s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zc8qs" [9e0a190b-cdce-482e-b8cd-30d2fb6a8c2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zc8qs" [9e0a190b-cdce-482e-b8cd-30d2fb6a8c2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.013201422s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1107 23:45:34.705522   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
E1107 23:45:34.710885   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
E1107 23:45:34.721556   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 23:45:34.741750   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
E1107 23:45:34.782066   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
E1107 23:45:34.862402   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-627949 exec deployment/netcat -- nslookup kubernetes.default
E1107 23:45:35.344242   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (93.46s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m33.461084182s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.46s)

TestNetworkPlugins/group/bridge/Start (146.53s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1107 23:46:15.670961   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m26.529893025s)
--- PASS: TestNetworkPlugins/group/bridge/Start (146.53s)

TestNetworkPlugins/group/false/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

TestNetworkPlugins/group/false/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9cfdr" [741e8198-bd28-4bd1-aeaa-ade2147f97a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9cfdr" [741e8198-bd28-4bd1-aeaa-ade2147f97a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.013554645s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.39s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/kubenet/Start (112.98s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-627949 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m52.981795458s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (112.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ds72b" [4b119f54-7aa2-4844-8571-4a2b8c9bb872] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 23:46:56.631631   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ds72b" [4b119f54-7aa2-4844-8571-4a2b8c9bb872] Running
E1107 23:47:00.817453   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.014355286s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (149.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-729146 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-729146 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m29.816438359s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.82s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k5nqz" [ac313909-7051-40ab-ba11-28edc165e0cf] Running
E1107 23:47:29.813659   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.025781739s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gt7b4" [b6cb9041-6d2e-403b-8929-fa7d8966cc80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gt7b4" [b6cb9041-6d2e-403b-8929-fa7d8966cc80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.018754876s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (93.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-883054 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
E1107 23:48:18.552784   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-883054 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (1m33.263298056s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-627949 "pgrep -a kubelet"
E1107 23:48:22.561852   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:22.567126   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:22.577452   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:22.598170   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-627949 replace --force -f testdata/netcat-deployment.yaml
E1107 23:48:22.638763   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:22.719146   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:22.879915   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x8jwh" [d483b423-52d2-4ee3-8deb-c1b36dc31a0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 23:48:23.200349   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:23.840907   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:25.122160   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:48:27.682333   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-x8jwh" [d483b423-52d2-4ee3-8deb-c1b36dc31a0b] Running
E1107 23:48:32.803076   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.010781838s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-627949 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-627949 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nddxb" [982103b7-9eb0-4b93-9a0b-a681cb586532] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nddxb" [982103b7-9eb0-4b93-9a0b-a681cb586532] Running
E1107 23:48:57.772539   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.015661756s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (77.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-692502 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-692502 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (1m17.055815793s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.06s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-627949 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-627949 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)
E1107 23:54:36.858152   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:54:40.433835   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:54:44.887229   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:54:48.612180   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:54:53.631799   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:53.637128   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:53.647423   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:53.667761   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:53.708062   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:53.788431   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:53.948855   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:54.269684   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:54.910355   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:56.191061   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:54:58.751742   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:55:03.872682   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:55:08.195086   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:55:11.673708   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:55:14.113294   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:55:16.561527   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:55:23.252546   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:55:34.594354   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/old-k8s-version-729146/client.crt: no such file or directory
E1107 23:55:34.703698   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
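The burst of `cert_rotation.go:168` errors above appears to be client-go's certificate watcher repeatedly failing to re-read `client.crt` files for profiles (auto-627949, old-k8s-version-729146, gvisor-826760, …) that earlier tests already deleted; the lines are log noise rather than failures. A quick way to list which profile certificates actually remain on disk is sketched below (the path layout is taken from the log; defaulting `MINIKUBE_HOME` to `~/.minikube` is an assumption for portability):

```shell
#!/usr/bin/env bash
# List client certificates still present under the minikube profiles dir.
# Profiles that are still referenced somewhere but missing here are the
# source of the "no such file or directory" spam above.
MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME/.minikube}"
found=0
for crt in "$MINIKUBE_HOME"/profiles/*/client.crt; do
  [ -e "$crt" ] && { echo "present: $crt"; found=1; }
done
[ "$found" -eq 1 ] || echo "no profile certs under $MINIKUBE_HOME/profiles"
```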

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-385734 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1107 23:49:20.927771   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:20.933090   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:20.943457   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:20.963760   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:21.004771   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:21.085091   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:21.245321   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:21.565633   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:22.207939   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:23.488794   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:26.049276   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:49:31.169478   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-385734 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m12.965952647s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-883054 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [60aa68a7-7fa3-4dc2-9e38-7500575ac4a7] Pending
helpers_test.go:344: "busybox" [60aa68a7-7fa3-4dc2-9e38-7500575ac4a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1107 23:49:40.433401   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
E1107 23:49:41.410423   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
helpers_test.go:344: "busybox" [60aa68a7-7fa3-4dc2-9e38-7500575ac4a7] Running
E1107 23:49:44.484975   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.04162092s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-883054 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.54s)
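DeployApp finishes by exec-ing `ulimit -n` inside the busybox pod, i.e. reading the container's open-file soft limit. The same builtin can be exercised host-side; a small sketch that, like the test, only expects a readable value (the readability check is an added sanity assumption, since the limit may also report `unlimited`):

```shell
#!/usr/bin/env bash
# Read the current shell's open-file soft limit, the host-side analogue of
# `kubectl exec busybox -- /bin/sh -c "ulimit -n"` in the test above.
soft=$(ulimit -n)
echo "open-file soft limit: $soft"
case "$soft" in
  unlimited|*[0-9]*) echo "limit readable" ;;
  *)                 echo "unexpected value: $soft" ;;
esac
```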

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-883054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-883054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.147539958s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-883054 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-883054 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-883054 --alsologtostderr -v=3: (13.16099072s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-729146 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eb93a790-81ed-4b23-9d67-a30a387496f2] Pending
helpers_test.go:344: "busybox" [eb93a790-81ed-4b23-9d67-a30a387496f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eb93a790-81ed-4b23-9d67-a30a387496f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.03823198s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-729146 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-729146 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1107 23:50:01.890943   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-729146 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-729146 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-729146 --alsologtostderr -v=3: (13.146559965s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-883054 -n no-preload-883054
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-883054 -n no-preload-883054: exit status 7 (96.005674ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-883054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/no-preload/serial/SecondStart (329.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-883054 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-883054 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (5m29.334058541s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-883054 -n no-preload-883054
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.62s)

TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-692502 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [57f6ca51-20e1-4884-b09d-5faf2a84907b] Pending
helpers_test.go:344: "busybox" [57f6ca51-20e1-4884-b09d-5faf2a84907b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [57f6ca51-20e1-4884-b09d-5faf2a84907b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.034361767s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-692502 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-729146 -n old-k8s-version-729146
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-729146 -n old-k8s-version-729146: exit status 7 (86.595919ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-729146 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (102.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-729146 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1107 23:50:16.561619   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:16.566940   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:16.579194   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:16.599547   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:16.639891   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:16.720267   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:16.880734   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:17.201324   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:17.841815   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:19.122980   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-729146 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (1m42.486093408s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-729146 -n old-k8s-version-729146
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (102.84s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-692502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-692502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.208543772s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-692502 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/embed-certs/serial/Stop (13.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-692502 --alsologtostderr -v=3
E1107 23:50:21.683967   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:23.252226   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.257573   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.267892   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.288232   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.328576   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.408898   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.569199   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:23.889337   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:24.530342   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:25.811062   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:50:26.804999   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:50:28.371850   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-692502 --alsologtostderr -v=3: (13.164812266s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.16s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385734 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a065233-cc7a-4974-bbe8-8cef22b9c688] Pending
helpers_test.go:344: "busybox" [8a065233-cc7a-4974-bbe8-8cef22b9c688] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1107 23:50:32.861207   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:50:33.492547   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8a065233-cc7a-4974-bbe8-8cef22b9c688] Running
E1107 23:50:34.703462   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
E1107 23:50:37.045856   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.039097456s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385734 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.48s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-692502 -n embed-certs-692502
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-692502 -n embed-certs-692502: exit status 7 (83.67773ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-692502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (321.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-692502 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-692502 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (5m20.934638996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-692502 -n embed-certs-692502
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (321.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-385734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-385734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048793699s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-385734 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-385734 --alsologtostderr -v=3
E1107 23:50:42.851100   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
E1107 23:50:43.733380   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-385734 --alsologtostderr -v=3: (13.134230714s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734: exit status 7 (86.028952ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-385734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-385734 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1107 23:50:57.526404   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:51:02.393753   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/gvisor-826760/client.crt: no such file or directory
E1107 23:51:04.213639   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:51:06.405455   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:51:21.827951   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:21.833264   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:21.843606   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:21.863948   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:21.904877   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:21.985264   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:22.145774   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:22.466335   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:23.106576   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:24.387290   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:26.948122   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:32.069400   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:38.486818   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:51:42.310214   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:51:45.174251   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:51:53.014263   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.019580   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.029910   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.050129   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.090420   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.170748   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.331533   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:53.652778   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:54.293176   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:55.573891   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:51:58.134508   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-385734 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (5m28.618739435s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.90s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-f58bn" [189030c6-9969-444e-bc27-4e112455c491] Running
E1107 23:52:02.791044   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:52:03.255375   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022298142s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-f58bn" [189030c6-9969-444e-bc27-4e112455c491] Running
E1107 23:52:04.771263   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010548113s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-729146 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-729146 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-729146 -n old-k8s-version-729146
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-729146 -n old-k8s-version-729146: exit status 2 (288.859634ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-729146 -n old-k8s-version-729146
E1107 23:52:13.496146   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-729146 -n old-k8s-version-729146: exit status 2 (292.284359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-729146 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-729146 -n old-k8s-version-729146
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-729146 -n old-k8s-version-729146
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

TestStartStop/group/newest-cni/serial/FirstStart (69.74s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-085027 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1107 23:52:27.831361   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:27.836714   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:27.847121   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:27.867483   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:27.907860   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:27.988229   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:28.148679   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:28.468960   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:29.109730   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:29.813342   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/ingress-addon-legacy-367854/client.crt: no such file or directory
E1107 23:52:30.390725   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:32.951532   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:33.976611   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:52:38.072462   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:52:43.751585   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:52:48.312722   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:53:00.407073   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
E1107 23:53:07.094468   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/custom-flannel-627949/client.crt: no such file or directory
E1107 23:53:08.793090   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:53:14.937582   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/enable-default-cni-627949/client.crt: no such file or directory
E1107 23:53:22.561845   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:53:22.964487   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:22.969765   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:22.980039   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:23.000338   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:23.040702   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:23.121087   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:23.281649   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:23.601826   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:24.242822   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:25.523619   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-085027 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m9.744465623s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.74s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-085027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-085027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.183388134s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/newest-cni/serial/Stop (8.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-085027 --alsologtostderr -v=3
E1107 23:53:28.084707   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:33.204938   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-085027 --alsologtostderr -v=3: (8.132307923s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-085027 -n newest-cni-085027
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-085027 -n newest-cni-085027: exit status 7 (86.399506ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-085027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (49.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-085027 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1107 23:53:40.443669   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/addons-625969/client.crt: no such file or directory
E1107 23:53:43.445832   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:53:46.273140   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.278510   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.288806   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.309094   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.349467   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.429834   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.590268   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:46.910912   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:47.551847   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:48.832842   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:49.753286   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/flannel-627949/client.crt: no such file or directory
E1107 23:53:50.246333   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/auto-627949/client.crt: no such file or directory
E1107 23:53:51.392975   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:56.514089   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:53:57.772056   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/functional-277453/client.crt: no such file or directory
E1107 23:54:03.926127   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
E1107 23:54:05.672353   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/false-627949/client.crt: no such file or directory
E1107 23:54:06.754758   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
E1107 23:54:20.927737   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kindnet-627949/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-085027 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (48.999853179s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-085027 -n newest-cni-085027
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.31s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-085027 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-085027 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-085027 -n newest-cni-085027
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-085027 -n newest-cni-085027: exit status 2 (260.638423ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-085027 -n newest-cni-085027
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-085027 -n newest-cni-085027: exit status 2 (255.768801ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-085027 --alsologtostderr -v=1
E1107 23:54:27.234889   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-085027 -n newest-cni-085027
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-085027 -n newest-cni-085027
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n5mgw" [750a040a-5746-4319-a61e-9936d3e839a9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018296045s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n5mgw" [750a040a-5746-4319-a61e-9936d3e839a9] Running
E1107 23:55:44.247531   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/calico-627949/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011115557s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-883054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-883054 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (2.76s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-883054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-883054 -n no-preload-883054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-883054 -n no-preload-883054: exit status 2 (270.324214ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-883054 -n no-preload-883054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-883054 -n no-preload-883054: exit status 2 (256.927861ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-883054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-883054 -n no-preload-883054
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-883054 -n no-preload-883054
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cq6fr" [103283f0-196c-4b86-a77d-532cd15c7562] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019945131s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cq6fr" [103283f0-196c-4b86-a77d-532cd15c7562] Running
E1107 23:56:03.477921   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/skaffold-230644/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011675335s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-692502 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-692502 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (2.61s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-692502 --alsologtostderr -v=1
E1107 23:56:06.807554   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/bridge-627949/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-692502 -n embed-certs-692502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-692502 -n embed-certs-692502: exit status 2 (273.315169ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-692502 -n embed-certs-692502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-692502 -n embed-certs-692502: exit status 2 (268.180151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-692502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-692502 -n embed-certs-692502
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-692502 -n embed-certs-692502
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.61s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c8nvm" [b23772d8-f4ec-4d8c-a1f7-bf97ba37a7c9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018776064s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c8nvm" [b23772d8-f4ec-4d8c-a1f7-bf97ba37a7c9] Running
E1107 23:56:30.115669   16866 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9672/.minikube/profiles/kubenet-627949/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011378358s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-385734 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-385734 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
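The VerifyKubernetesImages check above runs `crictl images -o json` in the guest and flags any image outside the expected Kubernetes image set ("Found non-minikube image: …"). A minimal sketch of that filtering, assuming a simplified prefix allow-list (the real expected-image set is computed in start_stop_delete_test.go from the Kubernetes version under test, and the JSON sample here is hypothetical):

```python
import json

# Hypothetical sample of `crictl images -o json` output (CRI ListImagesResponse shape).
CRICTL_JSON = """
{"images": [
  {"id": "sha256:aaa", "repoTags": ["registry.k8s.io/kube-apiserver:v1.28.3"]},
  {"id": "sha256:bbb", "repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
  {"id": "sha256:ccc", "repoTags": ["gcr.io/k8s-minikube/gvisor-addon:2"]}
]}
"""

# Hypothetical allow-list standing in for the test's expected-image set.
EXPECTED_PREFIXES = ("registry.k8s.io/", "docker.io/kubernetesui/")

def non_minikube_images(raw: str) -> list[str]:
    """Return repo tags that fall outside the expected image set."""
    images = json.loads(raw).get("images", [])
    tags = [t for img in images for t in img.get("repoTags", [])]
    return [t for t in tags if not t.startswith(EXPECTED_PREFIXES)]

print(non_minikube_images(CRICTL_JSON))
# -> ['gcr.io/k8s-minikube/busybox:1.28.4-glibc', 'gcr.io/k8s-minikube/gvisor-addon:2']
```

The two tags reported here match the two "Found non-minikube image" lines in the log above; the test treats them as known extras rather than failures.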

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-385734 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734: exit status 2 (245.971478ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734: exit status 2 (246.055343ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-385734 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-385734 -n default-k8s-diff-port-385734
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-627949 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-627949" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-627949

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: cri-docker daemon status:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: cri-docker daemon config:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: cri-dockerd version:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: containerd daemon status:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: containerd daemon config:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: containerd config dump:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: crio daemon status:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: crio daemon config:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: /etc/crio:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

>>> host: crio config:
* Profile "cilium-627949" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627949"

----------------------- debugLogs end: cilium-627949 [took: 4.176433943s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-627949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-627949
--- SKIP: TestNetworkPlugins/group/cilium (4.37s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-703291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-703291
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)