Test Report: KVM_Linux_containerd 17909

32b2a5fece3308ee5469fc0bf0007c33b5e4c18a:2024-01-15:32704

Failed tests (3/337)

| Order | Failed test                        | Duration (s) |
|-------|------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress        | 23.77        |
| 172   | TestHA/serial/StopSecondaryNode    | 81.78        |
| 174   | TestHA/serial/RestartSecondaryNode | 56.92        |
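To reproduce one of these failures locally, the usual entry point is minikube's integration-test target. The sketch below is an outline under assumptions, not this job's exact command: it follows the `make integration` / `TEST_ARGS` pattern from minikube's contributor docs (flag spelling may differ by release), checks out the commit listed above, and mirrors this job's kvm2/containerd configuration.

    # Sketch: rerun only the failing Ingress test against the commit under test
    git checkout 32b2a5fece3308ee5469fc0bf0007c33b5e4c18a
    make integration -e TEST_ARGS="-minikube-start-args='--driver=kvm2 --container-runtime=containerd' -test.run TestAddons/parallel/Ingress"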
TestAddons/parallel/Ingress (23.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-974059 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-974059 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-974059 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c81ffef8-7d01-4bd7-ae87-0cf3035b5083] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c81ffef8-7d01-4bd7-ae87-0cf3035b5083] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004803844s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-974059 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.115
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-974059 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (320.971667ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0115 02:51:04.541893   18043 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:51:04.542050   18043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:51:04.542061   18043 out.go:309] Setting ErrFile to fd 2...
	I0115 02:51:04.542069   18043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:51:04.542263   18043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:51:04.542531   18043 mustload.go:65] Loading cluster: addons-974059
	I0115 02:51:04.542871   18043 config.go:182] Loaded profile config "addons-974059": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:51:04.542896   18043 addons.go:597] checking whether the cluster is paused
	I0115 02:51:04.542998   18043 config.go:182] Loaded profile config "addons-974059": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:51:04.543013   18043 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:51:04.543421   18043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:51:04.543475   18043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:51:04.559221   18043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0115 02:51:04.559661   18043 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:51:04.560179   18043 main.go:141] libmachine: Using API Version  1
	I0115 02:51:04.560195   18043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:51:04.560561   18043 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:51:04.560755   18043 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:51:04.562263   18043 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:51:04.562461   18043 ssh_runner.go:195] Run: systemctl --version
	I0115 02:51:04.562482   18043 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:51:04.564605   18043 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:51:04.564969   18043 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:51:04.565007   18043 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:51:04.565126   18043 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:51:04.565256   18043 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:51:04.565378   18043 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:51:04.565486   18043 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:51:04.661885   18043 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 02:51:04.661950   18043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 02:51:04.742515   18043 cri.go:89] found id: "651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919"
	I0115 02:51:04.742539   18043 cri.go:89] found id: "e64a3e28cafb9f576bb93cf7ed88c11c83b86f4ed00ab96b94db87fa3870d133"
	I0115 02:51:04.742544   18043 cri.go:89] found id: "243dc64d1a3af578f1644e440f047e69fb0428229ac3d20698e04ae1fde09b1f"
	I0115 02:51:04.742548   18043 cri.go:89] found id: "e03f6862243608b3fc34c7addea06c86ebc8aebc6dcce3df79d6eba6a2e8f066"
	I0115 02:51:04.742552   18043 cri.go:89] found id: "4e676ddae82e04169a2622224bec5cc6f002644787ce0301d814a8d4197c0308"
	I0115 02:51:04.742560   18043 cri.go:89] found id: "1057fe670ce0bc6466d5a6f0a0b29edd119004ea97c55d7e25d59a2096f98260"
	I0115 02:51:04.742566   18043 cri.go:89] found id: "d390a03d34eead7667d56219b08905278e7ed7f56ec5f4c7ecd6c6e6fb0da398"
	I0115 02:51:04.742572   18043 cri.go:89] found id: "bae5562ac614519a8a767489b0eeac5f57f76a5f8dcd880f67b35256a93d6f7d"
	I0115 02:51:04.742579   18043 cri.go:89] found id: "fb3c158429e5185648047a46de5c8674935a9af0ec4c24ac882c08edb713f1b2"
	I0115 02:51:04.742588   18043 cri.go:89] found id: "0a4842b7d69c8b1a3b0b9d302906bdabe63faefcd917dcd5f36e5123788e9053"
	I0115 02:51:04.742594   18043 cri.go:89] found id: ""
	I0115 02:51:04.742665   18043 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0115 02:51:04.798699   18043 main.go:141] libmachine: Making call to close driver server
	I0115 02:51:04.798722   18043 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:51:04.799004   18043 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:51:04.799034   18043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:51:04.799043   18043 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:51:04.801567   18043 out.go:177] 
	W0115 02:51:04.803208   18043 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T02:51:04Z" level=error msg="stat /run/containerd/runc/k8s.io/98be54d5bcdfd3cda2e4c11315b422f73f4b622cbbe794130859cf006a9b3d38: no such file or directory"
	
	W0115 02:51:04.803227   18043 out.go:239] * 
	W0115 02:51:04.805096   18043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 02:51:04.806623   18043 out.go:177] 

** /stderr **
addons_test.go:308: failed to disable ingress-dns addon. args "out/minikube-linux-amd64 -p addons-974059 addons disable ingress-dns --alsologtostderr -v=1" : exit status 11
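The disable failed inside minikube's paused-state check rather than in the addon itself: as the log shows, `addons disable` first lists kube-system containers via crictl and then asks `runc list` for their state, and runc stat'ed a container state directory that no longer existed (plausibly a container torn down by the preceding addon disables racing the check). A hand-run sketch of that check, reusing the exact commands from the log above and the quoted `ssh` form used earlier for the curl step, would look like:

    # Sketch: rerun the paused-state check by hand on the minikube node
    out/minikube-linux-amd64 -p addons-974059 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-amd64 -p addons-974059 ssh "sudo runc --root /run/containerd/runc/k8s.io list -f json"

If the second command succeeds on a retry, that points at a transient race (a container exiting between the crictl listing and the runc stat), which would match the `no such file or directory` error above.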
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-974059 addons disable ingress --alsologtostderr -v=1: (8.039148894s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-974059 -n addons-974059
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-974059 logs -n 25: (1.237111287s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-041054                                                                     | download-only-041054 | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC | 15 Jan 24 02:46 UTC |
	| delete  | -p download-only-151909                                                                     | download-only-151909 | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC | 15 Jan 24 02:46 UTC |
	| delete  | -p download-only-006146                                                                     | download-only-006146 | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC | 15 Jan 24 02:46 UTC |
	| delete  | -p download-only-041054                                                                     | download-only-041054 | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC | 15 Jan 24 02:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-172971 | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC |                     |
	|         | binary-mirror-172971                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33023                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-172971                                                                     | binary-mirror-172971 | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC | 15 Jan 24 02:46 UTC |
	| addons  | enable dashboard -p                                                                         | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC |                     |
	|         | addons-974059                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC |                     |
	|         | addons-974059                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-974059 --wait=true                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:46 UTC | 15 Jan 24 02:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:49 UTC | 15 Jan 24 02:49 UTC |
	|         | addons-974059                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-974059 ssh cat                                                                       | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | /opt/local-path-provisioner/pvc-1077ad20-ac07-4be2-a7fc-a7cbe9e3db68_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-974059 addons disable                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | -p addons-974059                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | -p addons-974059                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-974059 ip                                                                            | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	| addons  | addons-974059 addons disable                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-974059 addons                                                                        | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-974059 addons disable                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:51 UTC |
	|         | addons-974059                                                                               |                      |         |         |                     |                     |
	| addons  | addons-974059 addons                                                                        | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:50 UTC | 15 Jan 24 02:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-974059 addons                                                                        | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:51 UTC | 15 Jan 24 02:51 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-974059 ssh curl -s                                                                   | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:51 UTC | 15 Jan 24 02:51 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-974059 ip                                                                            | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:51 UTC | 15 Jan 24 02:51 UTC |
	| addons  | addons-974059 addons disable                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:51 UTC |                     |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-974059 addons disable                                                                | addons-974059        | jenkins | v1.32.0 | 15 Jan 24 02:51 UTC | 15 Jan 24 02:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 02:46:16
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 02:46:16.864123   16000 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:46:16.864268   16000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:46:16.864278   16000 out.go:309] Setting ErrFile to fd 2...
	I0115 02:46:16.864282   16000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:46:16.864482   16000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:46:16.865074   16000 out.go:303] Setting JSON to false
	I0115 02:46:16.865845   16000 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1722,"bootTime":1705285055,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:46:16.865897   16000 start.go:138] virtualization: kvm guest
	I0115 02:46:16.868322   16000 out.go:177] * [addons-974059] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:46:16.869741   16000 out.go:177]   - MINIKUBE_LOCATION=17909
	I0115 02:46:16.871088   16000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:46:16.869751   16000 notify.go:220] Checking for updates...
	I0115 02:46:16.873832   16000 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:46:16.875217   16000 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:46:16.876519   16000 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 02:46:16.877751   16000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 02:46:16.879176   16000 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:46:16.908776   16000 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 02:46:16.910204   16000 start.go:296] selected driver: kvm2
	I0115 02:46:16.910214   16000 start.go:900] validating driver "kvm2" against <nil>
	I0115 02:46:16.910227   16000 start.go:911] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 02:46:16.910863   16000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:46:16.910932   16000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17909-7685/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 02:46:16.923982   16000 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 02:46:16.924062   16000 start_flags.go:308] no existing cluster config was found, will generate one from the flags 
	I0115 02:46:16.924255   16000 start_flags.go:943] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 02:46:16.924311   16000 cni.go:84] Creating CNI manager for ""
	I0115 02:46:16.924336   16000 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 02:46:16.924350   16000 start_flags.go:317] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 02:46:16.924421   16000 start.go:339] cluster config:
	{Name:addons-974059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-974059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:46:16.924526   16000 iso.go:125] acquiring lock: {Name:mk557eda9a6ce643c635b77cd4c9cb212ca64fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:46:16.926192   16000 out.go:177] * Starting "addons-974059" primary control-plane node in "addons-974059" cluster
	I0115 02:46:16.927447   16000 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:46:16.927483   16000 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 02:46:16.927494   16000 cache.go:56] Caching tarball of preloaded images
	I0115 02:46:16.927567   16000 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 02:46:16.927581   16000 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 02:46:16.927877   16000 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/config.json ...
	I0115 02:46:16.927908   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/config.json: {Name:mk330cbed5eed2e620c037688309bba4d01be5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:16.928047   16000 start.go:360] acquireMachinesLock for addons-974059: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 02:46:16.928105   16000 start.go:364] duration metric: took 43.446µs to acquireMachinesLock for "addons-974059"
	I0115 02:46:16.928127   16000 start.go:93] Provisioning new machine with config: &{Name:addons-974059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-974059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:46:16.928215   16000 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 02:46:16.929841   16000 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0115 02:46:16.929957   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:46:16.929996   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:46:16.942429   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0115 02:46:16.942808   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:46:16.943296   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:46:16.943319   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:46:16.943639   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:46:16.943818   16000 main.go:141] libmachine: (addons-974059) Calling .GetMachineName
	I0115 02:46:16.943944   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:16.944073   16000 start.go:159] libmachine.API.Create for "addons-974059" (driver="kvm2")
	I0115 02:46:16.944099   16000 client.go:168] LocalClient.Create starting
	I0115 02:46:16.944130   16000 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 02:46:17.095835   16000 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 02:46:17.381462   16000 main.go:141] libmachine: Running pre-create checks...
	I0115 02:46:17.381485   16000 main.go:141] libmachine: (addons-974059) Calling .PreCreateCheck
	I0115 02:46:17.381987   16000 main.go:141] libmachine: (addons-974059) Calling .GetConfigRaw
	I0115 02:46:17.382431   16000 main.go:141] libmachine: Creating machine...
	I0115 02:46:17.382447   16000 main.go:141] libmachine: (addons-974059) Calling .Create
	I0115 02:46:17.382587   16000 main.go:141] libmachine: (addons-974059) Creating KVM machine...
	I0115 02:46:17.383792   16000 main.go:141] libmachine: (addons-974059) DBG | found existing default KVM network
	I0115 02:46:17.384501   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:17.384354   16022 network.go:208] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000149900}
	I0115 02:46:17.384543   16000 main.go:141] libmachine: (addons-974059) DBG | created network xml: 
	I0115 02:46:17.384565   16000 main.go:141] libmachine: (addons-974059) DBG | <network>
	I0115 02:46:17.384597   16000 main.go:141] libmachine: (addons-974059) DBG |   <name>mk-addons-974059</name>
	I0115 02:46:17.384610   16000 main.go:141] libmachine: (addons-974059) DBG |   <dns enable='no'/>
	I0115 02:46:17.384616   16000 main.go:141] libmachine: (addons-974059) DBG |   
	I0115 02:46:17.384622   16000 main.go:141] libmachine: (addons-974059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0115 02:46:17.384629   16000 main.go:141] libmachine: (addons-974059) DBG |     <dhcp>
	I0115 02:46:17.384637   16000 main.go:141] libmachine: (addons-974059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0115 02:46:17.384646   16000 main.go:141] libmachine: (addons-974059) DBG |     </dhcp>
	I0115 02:46:17.384656   16000 main.go:141] libmachine: (addons-974059) DBG |   </ip>
	I0115 02:46:17.384665   16000 main.go:141] libmachine: (addons-974059) DBG |   
	I0115 02:46:17.384671   16000 main.go:141] libmachine: (addons-974059) DBG | </network>
	I0115 02:46:17.384698   16000 main.go:141] libmachine: (addons-974059) DBG | 
	I0115 02:46:17.389905   16000 main.go:141] libmachine: (addons-974059) DBG | trying to create private KVM network mk-addons-974059 192.168.39.0/24...
	I0115 02:46:17.452304   16000 main.go:141] libmachine: (addons-974059) DBG | private KVM network mk-addons-974059 192.168.39.0/24 created
	I0115 02:46:17.452355   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:17.452277   16022 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:46:17.452391   16000 main.go:141] libmachine: (addons-974059) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059 ...
	I0115 02:46:17.452417   16000 main.go:141] libmachine: (addons-974059) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 02:46:17.452438   16000 main.go:141] libmachine: (addons-974059) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 02:46:17.684690   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:17.684566   16022 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa...
	I0115 02:46:17.818090   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:17.817973   16022 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/addons-974059.rawdisk...
	I0115 02:46:17.818119   16000 main.go:141] libmachine: (addons-974059) DBG | Writing magic tar header
	I0115 02:46:17.818135   16000 main.go:141] libmachine: (addons-974059) DBG | Writing SSH key tar header
	I0115 02:46:17.818155   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:17.818105   16022 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059 ...
	I0115 02:46:17.818282   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059
	I0115 02:46:17.818301   16000 main.go:141] libmachine: (addons-974059) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059 (perms=drwx------)
	I0115 02:46:17.818309   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 02:46:17.818318   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:46:17.818325   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 02:46:17.818333   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 02:46:17.818339   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home/jenkins
	I0115 02:46:17.818346   16000 main.go:141] libmachine: (addons-974059) DBG | Checking permissions on dir: /home
	I0115 02:46:17.818353   16000 main.go:141] libmachine: (addons-974059) DBG | Skipping /home - not owner
	I0115 02:46:17.818364   16000 main.go:141] libmachine: (addons-974059) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 02:46:17.818383   16000 main.go:141] libmachine: (addons-974059) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 02:46:17.818412   16000 main.go:141] libmachine: (addons-974059) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 02:46:17.818438   16000 main.go:141] libmachine: (addons-974059) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 02:46:17.818454   16000 main.go:141] libmachine: (addons-974059) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 02:46:17.818467   16000 main.go:141] libmachine: (addons-974059) Creating domain...
	I0115 02:46:17.819298   16000 main.go:141] libmachine: (addons-974059) define libvirt domain using xml: 
	I0115 02:46:17.819332   16000 main.go:141] libmachine: (addons-974059) <domain type='kvm'>
	I0115 02:46:17.819356   16000 main.go:141] libmachine: (addons-974059)   <name>addons-974059</name>
	I0115 02:46:17.819364   16000 main.go:141] libmachine: (addons-974059)   <memory unit='MiB'>4000</memory>
	I0115 02:46:17.819370   16000 main.go:141] libmachine: (addons-974059)   <vcpu>2</vcpu>
	I0115 02:46:17.819375   16000 main.go:141] libmachine: (addons-974059)   <features>
	I0115 02:46:17.819386   16000 main.go:141] libmachine: (addons-974059)     <acpi/>
	I0115 02:46:17.819418   16000 main.go:141] libmachine: (addons-974059)     <apic/>
	I0115 02:46:17.819429   16000 main.go:141] libmachine: (addons-974059)     <pae/>
	I0115 02:46:17.819440   16000 main.go:141] libmachine: (addons-974059)     
	I0115 02:46:17.819446   16000 main.go:141] libmachine: (addons-974059)   </features>
	I0115 02:46:17.819453   16000 main.go:141] libmachine: (addons-974059)   <cpu mode='host-passthrough'>
	I0115 02:46:17.819459   16000 main.go:141] libmachine: (addons-974059)   
	I0115 02:46:17.819465   16000 main.go:141] libmachine: (addons-974059)   </cpu>
	I0115 02:46:17.819471   16000 main.go:141] libmachine: (addons-974059)   <os>
	I0115 02:46:17.819482   16000 main.go:141] libmachine: (addons-974059)     <type>hvm</type>
	I0115 02:46:17.819500   16000 main.go:141] libmachine: (addons-974059)     <boot dev='cdrom'/>
	I0115 02:46:17.819511   16000 main.go:141] libmachine: (addons-974059)     <boot dev='hd'/>
	I0115 02:46:17.819523   16000 main.go:141] libmachine: (addons-974059)     <bootmenu enable='no'/>
	I0115 02:46:17.819530   16000 main.go:141] libmachine: (addons-974059)   </os>
	I0115 02:46:17.819535   16000 main.go:141] libmachine: (addons-974059)   <devices>
	I0115 02:46:17.819541   16000 main.go:141] libmachine: (addons-974059)     <disk type='file' device='cdrom'>
	I0115 02:46:17.819551   16000 main.go:141] libmachine: (addons-974059)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/boot2docker.iso'/>
	I0115 02:46:17.819565   16000 main.go:141] libmachine: (addons-974059)       <target dev='hdc' bus='scsi'/>
	I0115 02:46:17.819573   16000 main.go:141] libmachine: (addons-974059)       <readonly/>
	I0115 02:46:17.819578   16000 main.go:141] libmachine: (addons-974059)     </disk>
	I0115 02:46:17.819599   16000 main.go:141] libmachine: (addons-974059)     <disk type='file' device='disk'>
	I0115 02:46:17.819618   16000 main.go:141] libmachine: (addons-974059)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 02:46:17.819630   16000 main.go:141] libmachine: (addons-974059)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/addons-974059.rawdisk'/>
	I0115 02:46:17.819646   16000 main.go:141] libmachine: (addons-974059)       <target dev='hda' bus='virtio'/>
	I0115 02:46:17.819657   16000 main.go:141] libmachine: (addons-974059)     </disk>
	I0115 02:46:17.819665   16000 main.go:141] libmachine: (addons-974059)     <interface type='network'>
	I0115 02:46:17.819672   16000 main.go:141] libmachine: (addons-974059)       <source network='mk-addons-974059'/>
	I0115 02:46:17.819681   16000 main.go:141] libmachine: (addons-974059)       <model type='virtio'/>
	I0115 02:46:17.819686   16000 main.go:141] libmachine: (addons-974059)     </interface>
	I0115 02:46:17.819697   16000 main.go:141] libmachine: (addons-974059)     <interface type='network'>
	I0115 02:46:17.819704   16000 main.go:141] libmachine: (addons-974059)       <source network='default'/>
	I0115 02:46:17.819717   16000 main.go:141] libmachine: (addons-974059)       <model type='virtio'/>
	I0115 02:46:17.819725   16000 main.go:141] libmachine: (addons-974059)     </interface>
	I0115 02:46:17.819731   16000 main.go:141] libmachine: (addons-974059)     <serial type='pty'>
	I0115 02:46:17.819739   16000 main.go:141] libmachine: (addons-974059)       <target port='0'/>
	I0115 02:46:17.819747   16000 main.go:141] libmachine: (addons-974059)     </serial>
	I0115 02:46:17.819753   16000 main.go:141] libmachine: (addons-974059)     <console type='pty'>
	I0115 02:46:17.819766   16000 main.go:141] libmachine: (addons-974059)       <target type='serial' port='0'/>
	I0115 02:46:17.819776   16000 main.go:141] libmachine: (addons-974059)     </console>
	I0115 02:46:17.819781   16000 main.go:141] libmachine: (addons-974059)     <rng model='virtio'>
	I0115 02:46:17.819790   16000 main.go:141] libmachine: (addons-974059)       <backend model='random'>/dev/random</backend>
	I0115 02:46:17.819797   16000 main.go:141] libmachine: (addons-974059)     </rng>
	I0115 02:46:17.819805   16000 main.go:141] libmachine: (addons-974059)     
	I0115 02:46:17.819810   16000 main.go:141] libmachine: (addons-974059)     
	I0115 02:46:17.819818   16000 main.go:141] libmachine: (addons-974059)   </devices>
	I0115 02:46:17.819823   16000 main.go:141] libmachine: (addons-974059) </domain>
	I0115 02:46:17.819842   16000 main.go:141] libmachine: (addons-974059) 
	I0115 02:46:17.825699   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:75:d6:a2 in network default
	I0115 02:46:17.826196   16000 main.go:141] libmachine: (addons-974059) Ensuring networks are active...
	I0115 02:46:17.826219   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:17.826816   16000 main.go:141] libmachine: (addons-974059) Ensuring network default is active
	I0115 02:46:17.827098   16000 main.go:141] libmachine: (addons-974059) Ensuring network mk-addons-974059 is active
	I0115 02:46:17.827581   16000 main.go:141] libmachine: (addons-974059) Getting domain xml...
	I0115 02:46:17.828198   16000 main.go:141] libmachine: (addons-974059) Creating domain...
	I0115 02:46:19.173618   16000 main.go:141] libmachine: (addons-974059) Waiting to get IP...
	I0115 02:46:19.174360   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:19.174700   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:19.174738   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:19.174675   16022 retry.go:31] will retry after 212.979936ms: waiting for machine to come up
	I0115 02:46:19.389000   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:19.389388   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:19.389413   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:19.389337   16022 retry.go:31] will retry after 368.524535ms: waiting for machine to come up
	I0115 02:46:19.759766   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:19.760276   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:19.760307   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:19.760213   16022 retry.go:31] will retry after 474.874641ms: waiting for machine to come up
	I0115 02:46:20.236854   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:20.237263   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:20.237289   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:20.237206   16022 retry.go:31] will retry after 507.54757ms: waiting for machine to come up
	I0115 02:46:20.745789   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:20.746238   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:20.746273   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:20.746195   16022 retry.go:31] will retry after 657.593111ms: waiting for machine to come up
	I0115 02:46:21.406552   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:21.406918   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:21.406945   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:21.406868   16022 retry.go:31] will retry after 782.537803ms: waiting for machine to come up
	I0115 02:46:22.191121   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:22.191475   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:22.191505   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:22.191415   16022 retry.go:31] will retry after 976.6005ms: waiting for machine to come up
	I0115 02:46:23.169360   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:23.169758   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:23.169788   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:23.169705   16022 retry.go:31] will retry after 1.429222176s: waiting for machine to come up
	I0115 02:46:24.601053   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:24.601495   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:24.601521   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:24.601458   16022 retry.go:31] will retry after 1.32927144s: waiting for machine to come up
	I0115 02:46:25.932258   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:25.932557   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:25.932579   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:25.932525   16022 retry.go:31] will retry after 1.897792462s: waiting for machine to come up
	I0115 02:46:27.832345   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:27.832789   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:27.832808   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:27.832751   16022 retry.go:31] will retry after 2.637093238s: waiting for machine to come up
	I0115 02:46:30.472629   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:30.472999   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:30.473023   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:30.472981   16022 retry.go:31] will retry after 2.56960207s: waiting for machine to come up
	I0115 02:46:33.043789   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:33.044096   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:33.044125   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:33.044060   16022 retry.go:31] will retry after 3.470902181s: waiting for machine to come up
	I0115 02:46:36.518519   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:36.518855   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find current IP address of domain addons-974059 in network mk-addons-974059
	I0115 02:46:36.518877   16000 main.go:141] libmachine: (addons-974059) DBG | I0115 02:46:36.518817   16022 retry.go:31] will retry after 4.271775516s: waiting for machine to come up
	I0115 02:46:40.794930   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:40.795365   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has current primary IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:40.795381   16000 main.go:141] libmachine: (addons-974059) Found IP for machine: 192.168.39.115
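Note: the "will retry after ..." lines above are minikube's retry helper polling libvirt for a DHCP lease until the domain reports an IP. As a rough sketch only (not the actual retry.go implementation), the capped, jittered backoff those waits suggest looks like this in Go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// pollUntil calls check until it succeeds or timeout elapses, roughly
	// doubling the wait between attempts (capped) and adding jitter so
	// concurrent pollers don't synchronize.
	func pollUntil(check func() error, timeout time.Duration) error {
		wait := 500 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil // e.g. the domain finally has an IP
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machine to come up")
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait)/2))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			if wait < 4*time.Second { // cap the base delay
				wait *= 2
			}
		}
	}

	func main() {
		attempts := 0
		_ = pollUntil(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, time.Minute)
	}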
	I0115 02:46:40.795406   16000 main.go:141] libmachine: (addons-974059) Reserving static IP address...
	I0115 02:46:40.795816   16000 main.go:141] libmachine: (addons-974059) DBG | unable to find host DHCP lease matching {name: "addons-974059", mac: "52:54:00:d6:47:28", ip: "192.168.39.115"} in network mk-addons-974059
	I0115 02:46:40.862992   16000 main.go:141] libmachine: (addons-974059) DBG | Getting to WaitForSSH function...
	I0115 02:46:40.863030   16000 main.go:141] libmachine: (addons-974059) Reserved static IP address: 192.168.39.115
	I0115 02:46:40.863051   16000 main.go:141] libmachine: (addons-974059) Waiting for SSH to be available...
	I0115 02:46:40.865318   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:40.865659   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:40.865702   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:40.865818   16000 main.go:141] libmachine: (addons-974059) DBG | Using SSH client type: external
	I0115 02:46:40.865840   16000 main.go:141] libmachine: (addons-974059) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa (-rw-------)
	I0115 02:46:40.865871   16000 main.go:141] libmachine: (addons-974059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 02:46:40.865884   16000 main.go:141] libmachine: (addons-974059) DBG | About to run SSH command:
	I0115 02:46:40.865893   16000 main.go:141] libmachine: (addons-974059) DBG | exit 0
	I0115 02:46:40.958579   16000 main.go:141] libmachine: (addons-974059) DBG | SSH cmd err, output: <nil>: 
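The probe above simply runs "exit 0" over the external /usr/bin/ssh client with the options printed in the log; the machine counts as reachable once that command exits cleanly. A minimal sketch of the same check via os/exec (host, user, and key path copied from the log; this is not minikube's sshutil code, and the key path below is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady returns true once sshd on the guest accepts a trivial command.
	func sshReady(ip, keyPath string) bool {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0", // the probe: success means SSH is available
		}
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}

	func main() {
		// keyPath is a placeholder; the log uses the machine's id_rsa.
		fmt.Println(sshReady("192.168.39.115", "/path/to/id_rsa"))
	}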
	I0115 02:46:40.958805   16000 main.go:141] libmachine: (addons-974059) KVM machine creation complete!
	I0115 02:46:40.959105   16000 main.go:141] libmachine: (addons-974059) Calling .GetConfigRaw
	I0115 02:46:40.959651   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:40.959850   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:40.959987   16000 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 02:46:40.960009   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:46:40.961271   16000 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 02:46:40.961285   16000 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 02:46:40.961290   16000 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 02:46:40.961297   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:40.963226   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:40.963615   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:40.963639   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:40.963764   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:40.963912   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:40.964048   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:40.964201   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:40.964348   16000 main.go:141] libmachine: Using SSH client type: native
	I0115 02:46:40.964725   16000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0115 02:46:40.964739   16000 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 02:46:41.074128   16000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:46:41.074157   16000 main.go:141] libmachine: Detecting the provisioner...
	I0115 02:46:41.074168   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.076606   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.076956   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.076985   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.077125   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.077325   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.077470   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.077595   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.077718   16000 main.go:141] libmachine: Using SSH client type: native
	I0115 02:46:41.078034   16000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0115 02:46:41.078049   16000 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 02:46:41.187591   16000 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 02:46:41.187696   16000 main.go:141] libmachine: found compatible host: buildroot
	I0115 02:46:41.187712   16000 main.go:141] libmachine: Provisioning with buildroot...
	I0115 02:46:41.187727   16000 main.go:141] libmachine: (addons-974059) Calling .GetMachineName
	I0115 02:46:41.187954   16000 buildroot.go:166] provisioning hostname "addons-974059"
	I0115 02:46:41.187975   16000 main.go:141] libmachine: (addons-974059) Calling .GetMachineName
	I0115 02:46:41.188129   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.190272   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.190600   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.190626   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.190759   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.190914   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.191063   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.191177   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.191326   16000 main.go:141] libmachine: Using SSH client type: native
	I0115 02:46:41.191677   16000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0115 02:46:41.191692   16000 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-974059 && echo "addons-974059" | sudo tee /etc/hostname
	I0115 02:46:41.310989   16000 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-974059
	
	I0115 02:46:41.311016   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.313401   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.313733   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.313764   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.313925   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.314107   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.314258   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.314372   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.314522   16000 main.go:141] libmachine: Using SSH client type: native
	I0115 02:46:41.314934   16000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0115 02:46:41.314953   16000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-974059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-974059/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-974059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 02:46:41.430958   16000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:46:41.430981   16000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 02:46:41.431014   16000 buildroot.go:174] setting up certificates
	I0115 02:46:41.431023   16000 provision.go:84] configureAuth start
	I0115 02:46:41.431033   16000 main.go:141] libmachine: (addons-974059) Calling .GetMachineName
	I0115 02:46:41.431307   16000 main.go:141] libmachine: (addons-974059) Calling .GetIP
	I0115 02:46:41.433658   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.433962   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.433992   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.434109   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.435887   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.436202   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.436229   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.436402   16000 provision.go:143] copyHostCerts
	I0115 02:46:41.436486   16000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 02:46:41.436639   16000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 02:46:41.436772   16000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
	I0115 02:46:41.436860   16000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.addons-974059 san=[127.0.0.1 192.168.39.115 addons-974059 localhost minikube]
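The "generating server cert ... san=[...]" step above boils down to issuing a CA-signed certificate whose SANs cover every name and IP the machine is reachable by. A condensed sketch using the standard crypto/x509 package (an assumption-laden illustration, not minikube's provision code):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCert issues a server certificate signed by ca/caKey with the SANs
	// from the log line above. (A real implementation would also return the
	// generated private key so it can be written alongside the cert.)
	func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-974059"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-974059", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.115")},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}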
	I0115 02:46:41.574481   16000 provision.go:177] copyRemoteCerts
	I0115 02:46:41.574540   16000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 02:46:41.574564   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.576998   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.577259   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.577282   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.577413   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.577571   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.577707   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.577814   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:46:41.660790   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 02:46:41.685706   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 02:46:41.710343   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0115 02:46:41.734494   16000 provision.go:87] duration metric: took 303.461515ms to configureAuth
	I0115 02:46:41.734523   16000 buildroot.go:189] setting minikube options for container-runtime
	I0115 02:46:41.734725   16000 config.go:182] Loaded profile config "addons-974059": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:46:41.734756   16000 main.go:141] libmachine: Checking connection to Docker...
	I0115 02:46:41.734768   16000 main.go:141] libmachine: (addons-974059) Calling .GetURL
	I0115 02:46:41.735796   16000 main.go:141] libmachine: (addons-974059) DBG | Using libvirt version 6000000
	I0115 02:46:41.737710   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.737997   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.738026   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.738199   16000 main.go:141] libmachine: Docker is up and running!
	I0115 02:46:41.738214   16000 main.go:141] libmachine: Reticulating splines...
	I0115 02:46:41.738223   16000 client.go:171] duration metric: took 24.794115819s to LocalClient.Create
	I0115 02:46:41.738248   16000 start.go:167] duration metric: took 24.794175795s to libmachine.API.Create "addons-974059"
	I0115 02:46:41.738321   16000 start.go:293] postStartSetup for "addons-974059" (driver="kvm2")
	I0115 02:46:41.738340   16000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 02:46:41.738368   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:41.738658   16000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 02:46:41.738687   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.740702   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.740993   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.741014   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.741161   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.741305   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.741428   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.741533   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:46:41.824343   16000 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 02:46:41.828842   16000 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 02:46:41.828865   16000 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 02:46:41.828936   16000 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 02:46:41.828970   16000 start.go:296] duration metric: took 90.633221ms for postStartSetup
	I0115 02:46:41.829007   16000 main.go:141] libmachine: (addons-974059) Calling .GetConfigRaw
	I0115 02:46:41.829606   16000 main.go:141] libmachine: (addons-974059) Calling .GetIP
	I0115 02:46:41.832003   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.832320   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.832344   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.832622   16000 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/config.json ...
	I0115 02:46:41.832776   16000 start.go:128] duration metric: took 24.904551277s to createHost
	I0115 02:46:41.832796   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.834802   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.835115   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.835133   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.835238   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.835403   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.835564   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.835702   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.835882   16000 main.go:141] libmachine: Using SSH client type: native
	I0115 02:46:41.836180   16000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0115 02:46:41.836191   16000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 02:46:41.947683   16000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705286801.928838678
	
	I0115 02:46:41.947712   16000 fix.go:216] guest clock: 1705286801.928838678
	I0115 02:46:41.947719   16000 fix.go:229] Guest: 2024-01-15 02:46:41.928838678 +0000 UTC Remote: 2024-01-15 02:46:41.832786426 +0000 UTC m=+25.017608184 (delta=96.052252ms)
	I0115 02:46:41.947745   16000 fix.go:200] guest clock delta is within tolerance: 96.052252ms
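The clock check above runs date +%s.%N on the guest and compares the result with the host's wall clock. A small sketch of that comparison (parsing assumes the usual 9-digit %N nanosecond field; the constants below reproduce the 96.052252ms delta the log reports):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestDelta parses `date +%s.%N` output and returns guest minus host.
	// Assumes the fractional part is the standard 9-digit nanosecond field.
	func guestDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		host := time.Unix(1705286801, 832786426) // "Remote" timestamp from the log
		d, _ := guestDelta("1705286801.928838678\n", host)
		within := d < time.Second && d > -time.Second
		fmt.Printf("delta=%v within tolerance: %v\n", d, within)
	}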
	I0115 02:46:41.947749   16000 start.go:83] releasing machines lock for "addons-974059", held for 25.019634525s
	I0115 02:46:41.947767   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:41.947958   16000 main.go:141] libmachine: (addons-974059) Calling .GetIP
	I0115 02:46:41.950370   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.950682   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.950700   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.950886   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:41.951318   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:41.951483   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:46:41.951589   16000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 02:46:41.951631   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.951698   16000 ssh_runner.go:195] Run: cat /version.json
	I0115 02:46:41.951723   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:46:41.954074   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.954328   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.954392   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.954419   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.954586   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.954689   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:41.954718   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:41.954765   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.954863   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:46:41.954925   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.954992   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:46:41.955045   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:46:41.955111   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:46:41.955214   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:46:42.064007   16000 ssh_runner.go:195] Run: systemctl --version
	I0115 02:46:42.069127   16000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 02:46:42.074141   16000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 02:46:42.074189   16000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 02:46:42.087416   16000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 02:46:42.087434   16000 start.go:494] detecting cgroup driver to use...
	I0115 02:46:42.087482   16000 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 02:46:42.121944   16000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 02:46:42.132956   16000 docker.go:217] disabling cri-docker service (if available) ...
	I0115 02:46:42.133020   16000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 02:46:42.144192   16000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 02:46:42.155155   16000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 02:46:42.253475   16000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 02:46:42.369584   16000 docker.go:233] disabling docker service ...
	I0115 02:46:42.369657   16000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 02:46:42.382495   16000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 02:46:42.393071   16000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 02:46:42.487908   16000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 02:46:42.582733   16000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 02:46:42.594556   16000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 02:46:42.610287   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 02:46:42.618536   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 02:46:42.626810   16000 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 02:46:42.626858   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 02:46:42.635295   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:46:42.643584   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 02:46:42.651862   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:46:42.660298   16000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 02:46:42.668772   16000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
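The sed commands above edit /etc/containerd/config.toml in place: SystemdCgroup is forced to false (matching the cgroupfs driver chosen earlier) and the CNI conf_dir is pinned to /etc/cni/net.d. For readability, here is an illustration of the same two rewrites using Go's regexp package (a swapped-in technique; minikube actually shells out to sed as logged):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteContainerdConfig applies the two edits the sed commands above
	// make: SystemdCgroup = false and conf_dir = "/etc/cni/net.d".
	func rewriteContainerdConfig(toml string) string {
		toml = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`).
			ReplaceAllString(toml, `${1}SystemdCgroup = false`)
		toml = regexp.MustCompile(`(?m)^([ \t]*)conf_dir = .*$`).
			ReplaceAllString(toml, `${1}conf_dir = "/etc/cni/net.d"`)
		return toml
	}

	func main() {
		in := "  SystemdCgroup = true\n  conf_dir = \"/etc/cni/custom\"\n"
		fmt.Print(rewriteContainerdConfig(in))
	}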
	I0115 02:46:42.676935   16000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 02:46:42.684113   16000 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 02:46:42.684156   16000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 02:46:42.694925   16000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 02:46:42.702273   16000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:46:42.795452   16000 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:46:42.822735   16000 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 02:46:42.822823   16000 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:46:42.827709   16000 retry.go:31] will retry after 1.484068718s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0115 02:46:44.313345   16000 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:46:44.318575   16000 start.go:562] Will wait 60s for crictl version
	I0115 02:46:44.318646   16000 ssh_runner.go:195] Run: which crictl
	I0115 02:46:44.322295   16000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 02:46:44.358914   16000 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 02:46:44.359004   16000 ssh_runner.go:195] Run: containerd --version
	I0115 02:46:44.395075   16000 ssh_runner.go:195] Run: containerd --version
	I0115 02:46:44.429169   16000 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 02:46:44.430568   16000 main.go:141] libmachine: (addons-974059) Calling .GetIP
	I0115 02:46:44.433105   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:44.433465   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:46:44.433493   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:46:44.433718   16000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 02:46:44.437619   16000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:46:44.448943   16000 kubeadm.go:877] updating cluster {Name:addons-974059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-974059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} ...
	I0115 02:46:44.449060   16000 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:46:44.449103   16000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:46:44.481489   16000 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 02:46:44.481549   16000 ssh_runner.go:195] Run: which lz4
	I0115 02:46:44.485195   16000 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 02:46:44.489139   16000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 02:46:44.489167   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0115 02:46:46.228610   16000 containerd.go:548] duration metric: took 1.743456519s to copy over tarball
	I0115 02:46:46.228676   16000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 02:46:49.137721   16000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.909016864s)
	I0115 02:46:49.137748   16000 containerd.go:555] duration metric: took 2.909111407s to extract the tarball
	I0115 02:46:49.137755   16000 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 02:46:49.178728   16000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:46:49.289293   16000 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:46:49.310812   16000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:46:49.362826   16000 retry.go:31] will retry after 259.313775ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T02:46:49Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0115 02:46:49.622387   16000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:46:49.663850   16000 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 02:46:49.663870   16000 cache_images.go:84] Images are preloaded, skipping loading
	I0115 02:46:49.663878   16000 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.28.4 containerd true true} ...
	I0115 02:46:49.664016   16000 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-974059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-974059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0115 02:46:49.664082   16000 ssh_runner.go:195] Run: sudo crictl info
	I0115 02:46:49.698737   16000 cni.go:84] Creating CNI manager for ""
	I0115 02:46:49.698766   16000 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 02:46:49.698777   16000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0115 02:46:49.698803   16000 kubeadm.go:180] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-974059 NodeName:addons-974059 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 02:46:49.698939   16000 kubeadm.go:186] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-974059"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 02:46:49.699014   16000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 02:46:49.707811   16000 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 02:46:49.707861   16000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 02:46:49.716096   16000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0115 02:46:49.731061   16000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 02:46:49.746034   16000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0115 02:46:49.762891   16000 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0115 02:46:49.766880   16000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:46:49.779028   16000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:46:49.886414   16000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 02:46:49.902226   16000 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059 for IP: 192.168.39.115
	I0115 02:46:49.902247   16000 certs.go:194] generating shared ca certs ...
	I0115 02:46:49.902266   16000 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:49.902411   16000 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 02:46:50.124901   16000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt ...
	I0115 02:46:50.124935   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt: {Name:mk84f79388e4ef9ca2bfb408bf40da936b161870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.125115   16000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key ...
	I0115 02:46:50.125129   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key: {Name:mkef00d45fba457a8d5b65bfa136cab5b1b1cded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.125223   16000 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 02:46:50.380081   16000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt ...
	I0115 02:46:50.380110   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt: {Name:mk82d0dfc245a4de377cc2af16cb956751ad8e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.380279   16000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key ...
	I0115 02:46:50.380293   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key: {Name:mk7d80abbf551b0b6180d7e3ee2e4538c48730e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.380379   16000 certs.go:256] generating profile certs ...
	I0115 02:46:50.380452   16000 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.key
	I0115 02:46:50.380471   16000 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt with IP's: []
	I0115 02:46:50.482088   16000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt ...
	I0115 02:46:50.482119   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: {Name:mkc36580ba966056b4acdfdac59b521af494b1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.482287   16000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.key ...
	I0115 02:46:50.482301   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.key: {Name:mkd32dc3269c160214b7bdf3705bf0e486ea64a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.482394   16000 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.key.fabe566a
	I0115 02:46:50.482419   16000 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.crt.fabe566a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.115]
	I0115 02:46:50.563177   16000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.crt.fabe566a ...
	I0115 02:46:50.563206   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.crt.fabe566a: {Name:mk2dfcfc18912effffda28ab96575aec7f93d8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.563372   16000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.key.fabe566a ...
	I0115 02:46:50.563405   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.key.fabe566a: {Name:mkd8e67d9aa1e41bf30165381990fb00098f1596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.563505   16000 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.crt.fabe566a -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.crt
	I0115 02:46:50.563607   16000 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.key.fabe566a -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.key
	I0115 02:46:50.563691   16000 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.key
	I0115 02:46:50.563716   16000 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.crt with IP's: []
	I0115 02:46:50.801137   16000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.crt ...
	I0115 02:46:50.801166   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.crt: {Name:mka6cb41cf044806585e3ac666868c2016fe4482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.801318   16000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.key ...
	I0115 02:46:50.801328   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.key: {Name:mkdf79469e147c5f119766c4b57d21dd8e4a7ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:46:50.801498   16000 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 02:46:50.801542   16000 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 02:46:50.801571   16000 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 02:46:50.801596   16000 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 02:46:50.802147   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 02:46:50.824846   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 02:46:50.846372   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 02:46:50.867364   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 02:46:50.888303   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0115 02:46:50.910247   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 02:46:50.931350   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 02:46:50.952160   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 02:46:50.973321   16000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 02:46:50.994665   16000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 02:46:51.009599   16000 ssh_runner.go:195] Run: openssl version
	I0115 02:46:51.014723   16000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 02:46:51.023935   16000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:46:51.028419   16000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:46:51.028470   16000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:46:51.033593   16000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
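
The ssh_runner.go lines above are minikube executing commands inside the KVM guest over SSH (Run for commands, scp for file pushes). A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh; the address, user, and key path are the ones the sshutil.go lines later in this log report, and the real runner adds retries and an scp mode that this omits:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the guest and runs one command, returning combined output.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("192.168.39.115:22", "docker",
            "/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa",
            "openssl version")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
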
	I0115 02:46:51.042632   16000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 02:46:51.046503   16000 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 02:46:51.046562   16000 kubeadm.go:391] StartCluster: {Name:addons-974059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-974059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:46:51.046650   16000 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 02:46:51.046726   16000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 02:46:51.081994   16000 cri.go:89] found id: ""
	I0115 02:46:51.082048   16000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 02:46:51.090356   16000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 02:46:51.098152   16000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 02:46:51.105905   16000 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 02:46:51.105918   16000 kubeadm.go:156] found existing configuration files:
	
	I0115 02:46:51.105947   16000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0115 02:46:51.113185   16000 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0115 02:46:51.113223   16000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0115 02:46:51.120862   16000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0115 02:46:51.128116   16000 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0115 02:46:51.128169   16000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0115 02:46:51.135808   16000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0115 02:46:51.143159   16000 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0115 02:46:51.143202   16000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0115 02:46:51.150803   16000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0115 02:46:51.158037   16000 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0115 02:46:51.158074   16000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
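
Each grep/rm pair above is minikube's stale-config cleanup: a kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; anything else (including, as here, a file that does not exist yet) is removed so the kubeadm init below can regenerate it. A local-filesystem sketch of that rule — the real code issues these checks as remote shell commands, as the log shows:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeIfStale drops a kubeconfig that does not reference the expected
    // control-plane endpoint; a missing file is treated like a stale one,
    // matching the "No such file or directory" branches in the log above.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil // up to date, keep it
        }
        rmErr := os.Remove(path)
        if rmErr != nil && !os.IsNotExist(rmErr) {
            return rmErr
        }
        if rmErr == nil {
            fmt.Printf("removed stale %s\n", path)
        }
        return nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := removeIfStale(f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
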
	I0115 02:46:51.165708   16000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0115 02:46:51.212664   16000 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0115 02:46:51.212864   16000 kubeadm.go:309] [preflight] Running pre-flight checks
	I0115 02:46:51.358833   16000 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 02:46:51.359103   16000 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 02:46:51.359240   16000 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 02:46:51.577038   16000 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 02:46:51.580319   16000 out.go:204]   - Generating certificates and keys ...
	I0115 02:46:51.580420   16000 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0115 02:46:51.580516   16000 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0115 02:46:51.691066   16000 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 02:46:51.856908   16000 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0115 02:46:52.304891   16000 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0115 02:46:52.690210   16000 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0115 02:46:52.986270   16000 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0115 02:46:52.986392   16000 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-974059 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0115 02:46:53.083254   16000 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0115 02:46:53.083477   16000 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-974059 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0115 02:46:53.470270   16000 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 02:46:53.556260   16000 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 02:46:53.851201   16000 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0115 02:46:53.851573   16000 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 02:46:53.948236   16000 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 02:46:54.027715   16000 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 02:46:54.296647   16000 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 02:46:54.449424   16000 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 02:46:54.449996   16000 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 02:46:54.454016   16000 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 02:46:54.456099   16000 out.go:204]   - Booting up control plane ...
	I0115 02:46:54.456183   16000 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 02:46:54.456251   16000 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 02:46:54.456322   16000 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 02:46:54.468786   16000 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 02:46:54.470067   16000 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 02:46:54.470105   16000 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0115 02:46:54.582228   16000 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 02:47:02.084668   16000 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.504783 seconds
	I0115 02:47:02.084848   16000 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 02:47:02.112136   16000 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 02:47:02.633250   16000 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 02:47:02.633456   16000 kubeadm.go:309] [mark-control-plane] Marking the node addons-974059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 02:47:03.147025   16000 kubeadm.go:309] [bootstrap-token] Using token: 6bjl52.vjk5d8v5wv3qa739
	I0115 02:47:03.148667   16000 out.go:204]   - Configuring RBAC rules ...
	I0115 02:47:03.148774   16000 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 02:47:03.154409   16000 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 02:47:03.162400   16000 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 02:47:03.165596   16000 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 02:47:03.172115   16000 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 02:47:03.175847   16000 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 02:47:03.190250   16000 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 02:47:03.413690   16000 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0115 02:47:03.568240   16000 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0115 02:47:03.569247   16000 kubeadm.go:309] 
	I0115 02:47:03.569320   16000 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0115 02:47:03.569347   16000 kubeadm.go:309] 
	I0115 02:47:03.569427   16000 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0115 02:47:03.569441   16000 kubeadm.go:309] 
	I0115 02:47:03.569481   16000 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0115 02:47:03.569541   16000 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 02:47:03.569625   16000 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 02:47:03.569639   16000 kubeadm.go:309] 
	I0115 02:47:03.569715   16000 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0115 02:47:03.569725   16000 kubeadm.go:309] 
	I0115 02:47:03.569799   16000 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 02:47:03.569811   16000 kubeadm.go:309] 
	I0115 02:47:03.569878   16000 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0115 02:47:03.570001   16000 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 02:47:03.570108   16000 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 02:47:03.570115   16000 kubeadm.go:309] 
	I0115 02:47:03.570235   16000 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 02:47:03.570361   16000 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0115 02:47:03.570373   16000 kubeadm.go:309] 
	I0115 02:47:03.570479   16000 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6bjl52.vjk5d8v5wv3qa739 \
	I0115 02:47:03.570606   16000 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 \
	I0115 02:47:03.570630   16000 kubeadm.go:309] 	--control-plane 
	I0115 02:47:03.570640   16000 kubeadm.go:309] 
	I0115 02:47:03.570726   16000 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0115 02:47:03.570735   16000 kubeadm.go:309] 
	I0115 02:47:03.570834   16000 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6bjl52.vjk5d8v5wv3qa739 \
	I0115 02:47:03.570976   16000 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 
	I0115 02:47:03.571727   16000 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
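
The --discovery-token-ca-cert-hash that joining nodes must present is, per the kubeadm documentation, a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A standard-library sketch that recomputes it from the CA file written to the guest earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
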
	I0115 02:47:03.572539   16000 cni.go:84] Creating CNI manager for ""
	I0115 02:47:03.572557   16000 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 02:47:03.574255   16000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 02:47:03.575671   16000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 02:47:03.586761   16000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
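
The two lines above show minikube pushing a 457-byte 1-k8s.conflist into /etc/cni/net.d to configure the bridge CNI it selected at cni.go:146. The payload itself is not echoed in the log; a hedged sketch of writing a minimal bridge-style conflist of that general shape (field values are illustrative placeholders, not the exact file minikube ships):

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        // A single-plugin CNI network list using the standard bridge plugin
        // with host-local IPAM; values here are for illustration only.
        conflist := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "k8s",
            "plugins": []map[string]any{{
                "type":             "bridge",
                "bridge":           "bridge",
                "isDefaultGateway": true,
                "ipam": map[string]any{
                    "type":   "host-local",
                    "subnet": "10.244.0.0/16",
                },
            }},
        }
        data, err := json.MarshalIndent(conflist, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
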
	I0115 02:47:03.603139   16000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 02:47:03.603206   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:03.603225   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-974059 minikube.k8s.io/updated_at=2024_01_15T02_47_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=addons-974059 minikube.k8s.io/primary=true
	I0115 02:47:03.647412   16000 ops.go:34] apiserver oom_adj: -16
	I0115 02:47:03.879988   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:04.380566   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:04.880108   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:05.380112   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:05.880819   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:06.380904   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:06.880400   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:07.380595   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:07.880031   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:08.380586   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:08.880384   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:09.380990   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:09.880974   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:10.380932   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:10.880536   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:11.380329   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:11.881044   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:12.380137   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:12.880582   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:13.381017   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:13.880496   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:14.380586   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:14.880561   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:15.380422   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:15.880584   16000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:47:16.083826   16000 kubeadm.go:1106] duration metric: took 12.480673027s to wait for elevateKubeSystemPrivileges
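
The burst of identical "kubectl get sa default" runs above (02:47:03.8 through 02:47:16.0) is a roughly 500ms poll waiting for the default service account to exist, which is how minikube decides the kube-system privilege elevation can proceed; the duration metric on the line above sums that loop. A plain-Go sketch of the same polling shape, using os/exec locally in place of the remote runner:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it exits zero or
    // the deadline passes, mirroring the ~500ms cadence visible in the log.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil // service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        start := time.Now()
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
    }
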
	W0115 02:47:16.083865   16000 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0115 02:47:16.083873   16000 kubeadm.go:393] duration metric: took 25.037314932s to StartCluster
	I0115 02:47:16.083889   16000 settings.go:142] acquiring lock: {Name:mk9dadd460779833544b9ee743c73944f5d142f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:47:16.084019   16000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:47:16.084506   16000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/kubeconfig: {Name:mkf5d0331212c9d6c1cc4e6eb80eacb35f40ffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:47:16.084728   16000 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 02:47:16.084742   16000 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:47:16.086852   16000 out.go:177] * Verifying Kubernetes components...
	I0115 02:47:16.084955   16000 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
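
"enable addons start" receives the toEnable map above (addon name to bool), and the flood of interleaved "Setting addon X=true" and libmachine lines that follows is minikube walking that map and launching one kvm2 driver plugin connection per enabled addon, concurrently. A toy, sequential version of the walk (sorted for deterministic output; names and values taken from the map above):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "ingress": true, "ingress-dns": true, "metrics-server": true,
            "registry": true, "storage-provisioner": true, "ambassador": false,
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        for _, name := range enabled {
            fmt.Printf("Setting addon %s=true in %q\n", name, "addons-974059")
        }
    }
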
	I0115 02:47:16.086921   16000 addons.go:69] Setting yakd=true in profile "addons-974059"
	I0115 02:47:16.086938   16000 addons.go:69] Setting ingress-dns=true in profile "addons-974059"
	I0115 02:47:16.086953   16000 addons.go:234] Setting addon yakd=true in "addons-974059"
	I0115 02:47:16.086954   16000 addons.go:69] Setting inspektor-gadget=true in profile "addons-974059"
	I0115 02:47:16.086989   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.086993   16000 addons.go:69] Setting volumesnapshots=true in profile "addons-974059"
	I0115 02:47:16.086999   16000 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-974059"
	I0115 02:47:16.086985   16000 addons.go:69] Setting metrics-server=true in profile "addons-974059"
	I0115 02:47:16.087013   16000 addons.go:69] Setting ingress=true in profile "addons-974059"
	I0115 02:47:16.087016   16000 addons.go:234] Setting addon volumesnapshots=true in "addons-974059"
	I0115 02:47:16.087020   16000 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-974059"
	I0115 02:47:16.087031   16000 addons.go:234] Setting addon ingress=true in "addons-974059"
	I0115 02:47:16.087036   16000 addons.go:234] Setting addon metrics-server=true in "addons-974059"
	I0115 02:47:16.087044   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.087027   16000 addons.go:69] Setting gcp-auth=true in profile "addons-974059"
	I0115 02:47:16.087053   16000 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-974059"
	I0115 02:47:16.087073   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.087083   16000 mustload.go:65] Loading cluster: addons-974059
	I0115 02:47:16.087092   16000 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-974059"
	I0115 02:47:16.087094   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.087112   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.087293   16000 config.go:182] Loaded profile config "addons-974059": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:47:16.087433   16000 addons.go:69] Setting default-storageclass=true in profile "addons-974059"
	I0115 02:47:16.087443   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087007   16000 addons.go:69] Setting helm-tiller=true in profile "addons-974059"
	I0115 02:47:16.087456   16000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-974059"
	I0115 02:47:16.087464   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087464   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087475   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087481   16000 addons.go:234] Setting addon helm-tiller=true in "addons-974059"
	I0115 02:47:16.087486   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087488   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087502   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.087510   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.086948   16000 addons.go:69] Setting registry=true in profile "addons-974059"
	I0115 02:47:16.087550   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087563   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087585   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087693   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087709   16000 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-974059"
	I0115 02:47:16.087726   16000 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-974059"
	I0115 02:47:16.087735   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087786   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.087810   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087902   16000 addons.go:69] Setting storage-provisioner=true in profile "addons-974059"
	I0115 02:47:16.087941   16000 addons.go:234] Setting addon storage-provisioner=true in "addons-974059"
	I0115 02:47:16.087044   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.087971   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.088089   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.088124   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.086994   16000 addons.go:234] Setting addon inspektor-gadget=true in "addons-974059"
	I0115 02:47:16.085538   16000 config.go:182] Loaded profile config "addons-974059": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:47:16.087002   16000 addons.go:69] Setting cloud-spanner=true in profile "addons-974059"
	I0115 02:47:16.088215   16000 addons.go:234] Setting addon cloud-spanner=true in "addons-974059"
	I0115 02:47:16.088237   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.088307   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.088330   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.086995   16000 addons.go:234] Setting addon ingress-dns=true in "addons-974059"
	I0115 02:47:16.088361   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.088393   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.087550   16000 addons.go:234] Setting addon registry=true in "addons-974059"
	I0115 02:47:16.088480   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.088503   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.088550   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.088565   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.090109   16000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:47:16.088582   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.088600   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.088617   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.100060   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.100116   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.100142   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.100188   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.107037   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
	I0115 02:47:16.107113   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
	I0115 02:47:16.107317   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0115 02:47:16.107458   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.107461   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.107668   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0115 02:47:16.107952   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.107958   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.107975   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.108012   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.108030   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.108080   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.108379   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.108389   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.108396   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.108396   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.108828   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.108848   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.109155   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.109170   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.109227   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.109277   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.109349   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.113242   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45667
	I0115 02:47:16.119832   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.119877   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.119883   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.119908   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.121837   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.121868   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.122653   16000 addons.go:234] Setting addon default-storageclass=true in "addons-974059"
	I0115 02:47:16.122691   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.122954   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.122995   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.128362   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0115 02:47:16.128568   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.128866   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.129102   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.129117   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.129187   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
	I0115 02:47:16.129428   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.129442   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.129502   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.129644   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.130094   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.130186   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.130683   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.130703   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.130749   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.130797   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.132336   16000 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-974059"
	I0115 02:47:16.132375   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.132689   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.132707   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.138712   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0115 02:47:16.138900   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.139164   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.139242   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.141154   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:16.141556   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.141592   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.141838   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0115 02:47:16.142090   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.142105   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.143132   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.143240   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.143808   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.143837   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.144124   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.144139   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.144537   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.145079   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.145120   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.148236   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I0115 02:47:16.148813   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.149427   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.149446   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.150237   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.150834   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.150879   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.151811   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0115 02:47:16.155781   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.156539   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.156559   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.156965   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.157762   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.157799   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.160646   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0115 02:47:16.161052   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.161662   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.161682   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.162100   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.162285   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.164023   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.164117   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
	I0115 02:47:16.166663   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 02:47:16.164628   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.169617   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 02:47:16.168677   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.171460   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.173122   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 02:47:16.172219   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.174419   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 02:47:16.176423   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 02:47:16.176077   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0115 02:47:16.176337   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.178348   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I0115 02:47:16.178362   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0115 02:47:16.179496   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 02:47:16.179574   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.179838   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.180018   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.180172   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35219
	I0115 02:47:16.180392   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.181698   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 02:47:16.182090   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.183068   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.182262   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.183132   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.182406   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.183216   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.182420   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.183045   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 02:47:16.184720   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 02:47:16.184737   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 02:47:16.184754   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.183529   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.183552   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.183689   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.184129   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.184857   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.185007   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.185081   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.185135   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.185305   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.185965   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.185994   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.187478   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.189235   16000 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 02:47:16.188597   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.189111   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.189448   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
	I0115 02:47:16.189550   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.189764   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.191336   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0115 02:47:16.191657   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0115 02:47:16.191741   16000 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 02:47:16.191923   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.193059   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.193065   16000 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 02:47:16.192006   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0115 02:47:16.192197   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.193118   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0115 02:47:16.192382   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.192429   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0115 02:47:16.193253   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 02:47:16.193270   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.194524   16000 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 02:47:16.192870   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.192922   16000 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 02:47:16.193921   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0115 02:47:16.193936   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.193954   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.193979   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.194504   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.194813   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.195241   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.195791   16000 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 02:47:16.195802   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 02:47:16.195820   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.195868   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 02:47:16.195879   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.196038   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.196853   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0115 02:47:16.197063   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.197078   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.197134   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.197130   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.197169   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0115 02:47:16.197400   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.197413   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.197463   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.197606   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.197618   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.197667   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.197728   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.197749   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.197949   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.198089   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.198106   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.198184   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.198210   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.199265   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.199267   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.199315   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.199369   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.199373   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.199423   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.199767   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.199825   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.199951   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.199988   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.199956   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.200214   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.200470   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.200594   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.200605   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.200655   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.200711   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.201138   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.201300   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.201340   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.201355   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.201520   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.201666   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.201677   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.201741   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.201756   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.202042   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.202061   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.202093   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.202146   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.202186   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.202233   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.202357   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.202418   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.203455   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.203510   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.203569   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.203609   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.204069   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.204133   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.204176   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.204475   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.204521   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.204629   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.206610   16000 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 02:47:16.205051   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.205082   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.205239   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.205764   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.207981   16000 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 02:47:16.208013   16000 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 02:47:16.209381   16000 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 02:47:16.208026   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 02:47:16.209401   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 02:47:16.209418   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.209423   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.207981   16000 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 02:47:16.208811   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:16.211854   16000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 02:47:16.210813   16000 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 02:47:16.210855   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:16.212460   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0115 02:47:16.212861   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.213026   16000 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 02:47:16.213031   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 02:47:16.213039   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 02:47:16.213051   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.213054   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.213320   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.213404   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.213723   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.213738   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.213802   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.213821   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.213993   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.214058   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.214153   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.214199   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.214355   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.214502   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.216312   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.216406   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.216637   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.216970   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.217028   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.217044   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.217067   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.217201   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.217306   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.217364   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.217520   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.217551   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.217575   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.217585   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.217722   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.217872   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.218104   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.218300   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.220000   16000 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0115 02:47:16.218437   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.221330   16000 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0115 02:47:16.221348   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0115 02:47:16.221363   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.221472   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.221622   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.224273   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.225431   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0115 02:47:16.225777   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.226369   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0115 02:47:16.226426   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.226440   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.226447   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I0115 02:47:16.226853   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.226879   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.226919   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.227019   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.227282   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.227296   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.227308   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.227323   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.227640   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.227756   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.227780   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.227850   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.227964   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.228413   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.228555   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.228663   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.228718   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.229244   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.230805   16000 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 02:47:16.229358   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.229488   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.231882   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0115 02:47:16.232289   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.232294   16000 out.go:177]   - Using image docker.io/busybox:stable
	I0115 02:47:16.233727   16000 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 02:47:16.233741   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 02:47:16.233757   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.232480   16000 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 02:47:16.232645   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.236065   16000 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 02:47:16.235068   16000 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 02:47:16.235525   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.236234   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.237150   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.237173   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.237245   16000 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 02:47:16.237254   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 02:47:16.237254   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.237267   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.236676   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.237308   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 02:47:16.237321   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.238428   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.238483   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.238863   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.238928   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.239108   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.240349   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.240798   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.242432   16000 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 02:47:16.241015   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.243992   16000 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 02:47:16.241305   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.241506   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35823
	I0115 02:47:16.242192   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.242457   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.242859   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.245194   16000 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 02:47:16.244131   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.244283   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.244303   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.244530   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:16.246332   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.246503   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.246550   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.246584   16000 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 02:47:16.246599   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 02:47:16.246612   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.246627   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.246723   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.246768   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:16.246780   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:16.247082   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:16.247316   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:16.249394   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:16.249877   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.251314   16000 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 02:47:16.252802   16000 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 02:47:16.252822   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 02:47:16.252840   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:16.250413   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.250426   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	W0115 02:47:16.250641   16000 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44006->192.168.39.115:22: read: connection reset by peer
	I0115 02:47:16.252909   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.252931   16000 retry.go:31] will retry after 356.94905ms: ssh: handshake failed: read tcp 192.168.39.1:44006->192.168.39.115:22: read: connection reset by peer
	W0115 02:47:16.250723   16000 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44012->192.168.39.115:22: read: connection reset by peer
	I0115 02:47:16.252955   16000 retry.go:31] will retry after 130.505667ms: ssh: handshake failed: read tcp 192.168.39.1:44012->192.168.39.115:22: read: connection reset by peer
	I0115 02:47:16.253017   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.253193   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.253332   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:16.255643   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.256082   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:16.256117   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:16.256219   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:16.256366   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:16.256504   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:16.256670   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	W0115 02:47:16.385252   16000 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44052->192.168.39.115:22: read: connection reset by peer
	I0115 02:47:16.385293   16000 retry.go:31] will retry after 278.398162ms: ssh: handshake failed: read tcp 192.168.39.1:44052->192.168.39.115:22: read: connection reset by peer
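The sshutil warnings above are transient handshake resets, and retry.go answers each one with a short randomized backoff before redialing. A minimal sketch of that dial-and-retry pattern, assuming golang.org/x/crypto/ssh and reusing the address, user, and key path from the log:

	package main

	import (
		"fmt"
		"log"
		"math/rand"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry mirrors the logged behavior: on a handshake failure it
	// waits a short randomized interval and redials, up to maxAttempts.
	func dialWithRetry(addr, user, keyPath string, maxAttempts int) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; not for production
			Timeout:         10 * time.Second,
		}
		var lastErr error
		for i := 0; i < maxAttempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			backoff := time.Duration(100+rand.Intn(300)) * time.Millisecond
			log.Printf("will retry after %v: %v", backoff, err)
			time.Sleep(backoff)
		}
		return nil, fmt.Errorf("ssh dial %s failed after %d attempts: %w", addr, maxAttempts, lastErr)
	}

	func main() {
		client, err := dialWithRetry("192.168.39.115:22", "docker",
			"/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa", 5)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	}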
	I0115 02:47:16.637859   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 02:47:16.684134   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 02:47:16.839255   16000 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0115 02:47:16.839278   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
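Each "scp memory --> <path> (N bytes)" line copies a manifest held in memory straight onto the node; nothing is staged on local disk first. A rough sketch of that idea over an established *ssh.Client (the client from the previous sketch would do; copyMemory is an illustrative name, not minikube's API):

	package sshcopy

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory writes an in-memory asset to a remote path by piping it
	// into `sudo tee` over an SSH session, roughly what the
	// "scp memory --> /etc/kubernetes/addons/... (N bytes)" lines record.
	func copyMemory(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee writes the payload with root privileges; its stdout echo is discarded.
		return sess.Run(fmt.Sprintf("sudo tee %q > /dev/null", remotePath))
	}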
	I0115 02:47:16.934419   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 02:47:16.941794   16000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 02:47:16.943349   16000 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
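The /bin/bash pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block ahead of the forward directive (and a log directive ahead of errors), then pushes the result back with kubectl replace. If it succeeds, the Corefile gains exactly the block the sed script carries:

	hosts {
	    192.168.39.1 host.minikube.internal
	    fallthrough
	}

The fallthrough keeps lookups for every other name flowing on to the forward plugin, so only host.minikube.internal is answered locally.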
	I0115 02:47:17.016264   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 02:47:17.018861   16000 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 02:47:17.018880   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 02:47:17.027860   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 02:47:17.027883   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 02:47:17.040260   16000 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 02:47:17.040279   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 02:47:17.070124   16000 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 02:47:17.070144   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 02:47:17.094241   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 02:47:17.112832   16000 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 02:47:17.112853   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0115 02:47:17.204777   16000 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 02:47:17.204795   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 02:47:17.314061   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 02:47:17.314090   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 02:47:17.387680   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 02:47:17.415430   16000 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 02:47:17.415451   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 02:47:17.416689   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 02:47:17.421798   16000 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 02:47:17.421813   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 02:47:17.428347   16000 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 02:47:17.428362   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 02:47:17.508017   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 02:47:17.570096   16000 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 02:47:17.570119   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 02:47:17.592783   16000 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 02:47:17.592807   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 02:47:17.654508   16000 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 02:47:17.654536   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 02:47:17.772609   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 02:47:17.772631   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 02:47:17.822073   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 02:47:17.822928   16000 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 02:47:17.822950   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 02:47:17.920807   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 02:47:17.920836   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 02:47:17.946900   16000 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 02:47:17.946925   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 02:47:18.012255   16000 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 02:47:18.012284   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 02:47:18.019724   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 02:47:18.019745   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 02:47:18.162860   16000 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 02:47:18.162884   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 02:47:18.177274   16000 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 02:47:18.177294   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 02:47:18.431962   16000 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 02:47:18.431984   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 02:47:18.436737   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 02:47:18.439647   16000 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 02:47:18.439665   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 02:47:18.553706   16000 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 02:47:18.553737   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 02:47:18.558301   16000 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 02:47:18.558318   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 02:47:18.627980   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 02:47:18.791223   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 02:47:18.920379   16000 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 02:47:18.920411   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 02:47:19.039981   16000 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 02:47:19.040011   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 02:47:19.220923   16000 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 02:47:19.220948   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 02:47:19.456969   16000 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 02:47:19.456994   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 02:47:19.672302   16000 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 02:47:19.672323   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 02:47:19.829581   16000 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 02:47:19.829608   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 02:47:19.935173   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 02:47:20.051927   16000 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 02:47:20.051949   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 02:47:20.168893   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
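Note the shape of the apply commands: every addon goes down as a single kubectl apply with one -f per manifest and an explicit KUBECONFIG, executed on the node. Run locally, the equivalent is one exec call (a sketch; applyManifests is an illustrative helper, not minikube's):

	package addons

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests mirrors the logged invocation: one kubectl apply with
	// a -f flag per manifest file, against an explicit kubeconfig.
	func applyManifests(kubeconfig, kubectl string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}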
	I0115 02:47:22.832462   16000 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 02:47:22.832527   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:22.835797   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:22.836336   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:22.836365   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:22.836563   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:22.836756   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:22.836917   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:22.837055   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:23.322953   16000 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 02:47:23.527957   16000 addons.go:234] Setting addon gcp-auth=true in "addons-974059"
	I0115 02:47:23.528013   16000 host.go:66] Checking if "addons-974059" exists ...
	I0115 02:47:23.528324   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:23.528353   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:23.543428   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I0115 02:47:23.543787   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:23.544245   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:23.544270   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:23.544568   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:23.545131   16000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:47:23.545178   16000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:47:23.559189   16000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0115 02:47:23.559582   16000 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:47:23.560039   16000 main.go:141] libmachine: Using API Version  1
	I0115 02:47:23.560053   16000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:47:23.560415   16000 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:47:23.560588   16000 main.go:141] libmachine: (addons-974059) Calling .GetState
	I0115 02:47:23.562123   16000 main.go:141] libmachine: (addons-974059) Calling .DriverName
	I0115 02:47:23.562300   16000 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 02:47:23.562321   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHHostname
	I0115 02:47:23.564862   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:23.565202   16000 main.go:141] libmachine: (addons-974059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:47:28", ip: ""} in network mk-addons-974059: {Iface:virbr1 ExpiryTime:2024-01-15 03:46:33 +0000 UTC Type:0 Mac:52:54:00:d6:47:28 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:addons-974059 Clientid:01:52:54:00:d6:47:28}
	I0115 02:47:23.565238   16000 main.go:141] libmachine: (addons-974059) DBG | domain addons-974059 has defined IP address 192.168.39.115 and MAC address 52:54:00:d6:47:28 in network mk-addons-974059
	I0115 02:47:23.565392   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHPort
	I0115 02:47:23.565560   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHKeyPath
	I0115 02:47:23.565693   16000 main.go:141] libmachine: (addons-974059) Calling .GetSSHUsername
	I0115 02:47:23.565814   16000 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/addons-974059/id_rsa Username:docker}
	I0115 02:47:25.590638   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.906475875s)
	I0115 02:47:25.590711   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:25.590702   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.952808492s)
	I0115 02:47:25.590752   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:25.590768   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:25.590723   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:25.591044   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:25.591062   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:25.591074   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:25.591084   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:25.591104   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:25.591143   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:25.591155   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:25.591168   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:25.591180   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:25.591265   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:25.591278   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:25.591513   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:25.591532   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:25.591552   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:25.610955   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:25.610978   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:25.611205   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:25.611225   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.209607   16000 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.267778364s)
	I0115 02:47:28.209699   16000 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.266322112s)
	I0115 02:47:28.209741   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.193441379s)
	I0115 02:47:28.209778   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.115510287s)
	I0115 02:47:28.209822   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.209838   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.209785   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.209874   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.793165682s)
	I0115 02:47:28.209883   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.209744   16000 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0115 02:47:28.209911   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.701865685s)
	I0115 02:47:28.209939   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.387838146s)
	I0115 02:47:28.209947   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.209958   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.209962   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.209975   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210013   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.773245813s)
	I0115 02:47:28.210036   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210043   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.582034889s)
	I0115 02:47:28.210047   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210061   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210073   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210220   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.418921657s)
	W0115 02:47:28.210255   16000 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 02:47:28.209822   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.822118548s)
	I0115 02:47:28.210302   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210334   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.210314   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.210337   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210276   16000 retry.go:31] will retry after 183.620349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
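The failure above is a CRD ordering race, not a broken manifest: the same apply batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass object, and the object is mapped before the API server has established the new CRDs, hence "ensure CRDs are installed first". retry.go papers over it 183ms later, by which time the CRDs exist. An explicit alternative is to wait for establishment before applying dependents (a sketch; waitForCRDs is an illustrative helper, with the CRD names taken from the stdout above):

	package addons

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// waitForCRDs blocks until each CRD reports the Established condition,
	// removing the race hit in the log.
	func waitForCRDs(kubeconfig string, crds []string) error {
		for _, crd := range crds {
			cmd := exec.Command("kubectl", "wait",
				"--for=condition=Established",
				"crd/"+crd, "--timeout=60s")
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("waiting for %s: %v\n%s", crd, err, out)
			}
		}
		return nil
	}

	// Usage with the CRDs created above:
	//   waitForCRDs(cfg, []string{
	//       "volumesnapshotclasses.snapshot.storage.k8s.io",
	//       "volumesnapshotcontents.snapshot.storage.k8s.io",
	//       "volumesnapshots.snapshot.storage.k8s.io",
	//   })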
	I0115 02:47:28.209907   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.275451505s)
	I0115 02:47:28.210433   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210436   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.210472   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.210489   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.210516   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.210527   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.210535   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210492   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210564   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210721   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.210751   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.210769   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.210778   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210787   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210832   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.210851   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.210860   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.210869   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210878   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210473   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.210909   16000 node_ready.go:35] waiting up to 6m0s for node "addons-974059" to be "Ready" ...
	I0115 02:47:28.210922   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.210930   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.210939   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210448   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.210998   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.211028   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.209902   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.211045   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.211048   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.211056   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.211064   16000 addons.go:470] Verifying addon metrics-server=true in "addons-974059"
	I0115 02:47:28.210544   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.211405   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.211429   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.211437   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.211446   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.211454   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.211491   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.211510   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.211518   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.211527   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.211535   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.211573   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.211590   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.211598   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.211606   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.211616   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.212108   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.212133   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.212142   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.212322   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.212344   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.212353   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.212494   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.212515   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.212522   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.212851   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.212861   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.212869   16000 addons.go:470] Verifying addon registry=true in "addons-974059"
	I0115 02:47:28.215108   16000 out.go:177] * Verifying registry addon...
	I0115 02:47:28.213531   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.215343   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.215354   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.215361   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.215371   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.215379   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.216393   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.216400   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.215409   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.216454   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.217369   16000 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 02:47:28.217981   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.217982   16000 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-974059 service yakd-dashboard -n yakd-dashboard
	
	I0115 02:47:28.218003   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.219096   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.219333   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.219348   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.219357   16000 addons.go:470] Verifying addon ingress=true in "addons-974059"
	I0115 02:47:28.219367   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.220831   16000 out.go:177] * Verifying ingress addon...
	I0115 02:47:28.220452   16000 node_ready.go:49] node "addons-974059" has status "Ready":"True"
	I0115 02:47:28.222176   16000 node_ready.go:38] duration metric: took 11.210843ms for node "addons-974059" to be "Ready" ...
	I0115 02:47:28.222196   16000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0115 02:47:28.222667   16000 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 02:47:28.233067   16000 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 02:47:28.233086   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:28.238463   16000 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 02:47:28.238478   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
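
The kapi.go lines above poll a label selector until every matching pod is Running. A minimal sketch of that kind of loop using client-go follows; it is an illustration of the pattern these log lines imply, not minikube's actual kapi implementation, and the clientset `cs`, namespace, and timeout are assumptions:

	// waitForLabel polls pods matching selector in ns until all are Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						// Mirrors the "current state: Pending" lines in this log.
						allRunning = false
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

(Imports assumed: context, fmt, time, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes".)
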
	I0115 02:47:28.250828   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:28.250849   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:28.251089   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:28.251116   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:28.251130   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:28.255843   16000 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5r94g" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.275496   16000 pod_ready.go:92] pod "coredns-5dd5756b68-5r94g" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:28.275521   16000 pod_ready.go:81] duration metric: took 19.656023ms for pod "coredns-5dd5756b68-5r94g" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.275533   16000 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gc5mq" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.306140   16000 pod_ready.go:92] pod "coredns-5dd5756b68-gc5mq" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:28.306167   16000 pod_ready.go:81] duration metric: took 30.625266ms for pod "coredns-5dd5756b68-gc5mq" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.306180   16000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.341273   16000 pod_ready.go:92] pod "etcd-addons-974059" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:28.341321   16000 pod_ready.go:81] duration metric: took 35.127458ms for pod "etcd-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.341343   16000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.355383   16000 pod_ready.go:92] pod "kube-apiserver-addons-974059" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:28.355418   16000 pod_ready.go:81] duration metric: took 14.065668ms for pod "kube-apiserver-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.355430   16000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.394366   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 02:47:28.615463   16000 pod_ready.go:92] pod "kube-controller-manager-addons-974059" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:28.615489   16000 pod_ready.go:81] duration metric: took 260.049055ms for pod "kube-controller-manager-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.615503   16000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bsgmf" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:28.713933   16000 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-974059" context rescaled to 1 replicas
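
The coredns rescale above maps onto the Deployments Scale subresource. A hedged sketch with client-go (the clientset name and surrounding error handling are assumptions):

	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // "rescaled to 1 replicas"
	_, err = cs.AppsV1().Deployments("kube-system").
		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
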
	I0115 02:47:28.723788   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:28.726608   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:29.018542   16000 pod_ready.go:92] pod "kube-proxy-bsgmf" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:29.018565   16000 pod_ready.go:81] duration metric: took 403.05404ms for pod "kube-proxy-bsgmf" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:29.018574   16000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:29.225030   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:29.231861   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:29.438008   16000 pod_ready.go:92] pod "kube-scheduler-addons-974059" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:29.438032   16000 pod_ready.go:81] duration metric: took 419.450742ms for pod "kube-scheduler-addons-974059" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:29.438045   16000 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:29.723814   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:29.730024   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:30.254089   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:30.270043   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:30.344549   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.409327991s)
	I0115 02:47:30.344601   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:30.344606   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.175670883s)
	I0115 02:47:30.344644   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:30.344661   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:30.344616   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:30.344645   16000 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.782323423s)
	I0115 02:47:30.346407   16000 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 02:47:30.344910   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:30.345021   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:30.345057   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:30.345120   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:30.347763   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:30.347775   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:30.347798   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:30.347813   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:30.349083   16000 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 02:47:30.347780   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:30.348158   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:30.348162   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:30.350285   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:30.350317   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:30.350317   16000 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 02:47:30.350398   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 02:47:30.350552   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:30.350573   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:30.350583   16000 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-974059"
	I0115 02:47:30.351955   16000 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 02:47:30.353744   16000 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 02:47:30.394489   16000 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 02:47:30.394510   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:30.545673   16000 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 02:47:30.545697   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 02:47:30.625208   16000 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 02:47:30.625226   16000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
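
The "scp memory --> <path> (N bytes)" lines describe copying an in-memory manifest to the node over the existing SSH connection. One way to sketch that with golang.org/x/crypto/ssh (the helper below and its use of sudo tee are assumptions for illustration, not minikube's ssh_runner API):

	// copyBytes streams data to dst on the remote host via an open SSH client.
	func copyBytes(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// sudo tee writes stdin to the destination file; stdout is discarded.
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
	}
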
	I0115 02:47:30.730601   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:30.733342   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:30.742620   16000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 02:47:30.863563   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:31.223812   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:31.227089   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:31.360331   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:31.443444   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:31.753159   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:31.753286   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:31.868348   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:32.058209   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.663788779s)
	I0115 02:47:32.058273   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:32.058294   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:32.058619   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:32.058645   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:32.058663   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:32.058681   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:32.058696   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:32.058933   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:32.059438   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:32.059004   16000 main.go:141] libmachine: (addons-974059) DBG | Closing plugin on server side
	I0115 02:47:32.225293   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:32.227754   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:32.360749   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:32.730125   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:32.732464   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:32.861671   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:32.950851   16000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.208201259s)
	I0115 02:47:32.950903   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:32.950916   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:32.951165   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:32.951181   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:32.951245   16000 main.go:141] libmachine: Making call to close driver server
	I0115 02:47:32.951258   16000 main.go:141] libmachine: (addons-974059) Calling .Close
	I0115 02:47:32.951532   16000 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:47:32.951547   16000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:47:32.953271   16000 addons.go:470] Verifying addon gcp-auth=true in "addons-974059"
	I0115 02:47:32.954914   16000 out.go:177] * Verifying gcp-auth addon...
	I0115 02:47:32.957309   16000 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 02:47:32.967532   16000 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 02:47:32.967547   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:33.223665   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:33.226751   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:33.359138   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:33.444339   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:33.461129   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:33.726676   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:33.729443   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:33.859091   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:33.959953   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:34.222899   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:34.226320   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:34.359841   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:34.463348   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:34.724288   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:34.729142   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:34.859592   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:34.961402   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:35.222670   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:35.226842   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:35.360768   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:35.460815   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:35.723434   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:35.727503   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:35.861513   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:35.944552   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:35.961461   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:36.224586   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:36.229159   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:36.360584   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:36.462423   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:36.724658   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:36.727303   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:36.859748   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:36.960563   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:37.223224   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:37.226941   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:37.360209   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:37.460994   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:37.722794   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:37.726897   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:37.859670   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:37.960711   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:38.223402   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:38.226722   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:38.360683   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:38.444906   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:38.460940   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:38.723143   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:38.726395   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:38.866031   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:38.960127   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:39.226985   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:39.229107   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:39.359236   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:39.460693   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:39.724882   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:39.726967   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:39.859623   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:39.961201   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:40.222991   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:40.226680   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:40.360066   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:40.461562   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:40.724226   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:40.728074   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:40.862229   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:40.948169   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:40.962805   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:41.223808   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:41.227561   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:41.361944   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:41.462113   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:41.723568   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:41.727534   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:41.860036   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:41.961153   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:42.223084   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:42.227646   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:42.360233   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:42.460465   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:42.723873   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:42.727266   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:42.859046   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:42.961308   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:43.222828   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:43.227473   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:43.359460   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:43.445484   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:43.461301   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:43.732739   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:43.734587   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:43.859623   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:43.960886   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:44.223528   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:44.226516   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:44.364281   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:44.460925   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:44.729910   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:44.732303   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:44.859891   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:45.171179   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:45.223542   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:45.226400   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:45.359020   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:45.460965   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:45.724044   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:45.727215   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:45.861290   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:45.944966   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:45.960996   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:46.223695   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:46.227098   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:46.360367   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:46.461725   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:46.725432   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:46.728782   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:46.860524   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:46.961444   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:47.224251   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:47.227346   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:47.360625   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:47.468544   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:47.724045   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:47.726640   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:47.859806   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:47.945110   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:47.962663   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:48.224701   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:48.226078   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:48.360463   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:48.460883   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:48.724813   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:48.727547   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:48.859773   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:48.961055   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:49.223730   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:49.226241   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:49.360193   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:49.461512   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:49.723243   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:49.726807   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:49.859764   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:49.946241   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:49.961645   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:50.223343   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:50.226342   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:50.359484   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:50.461922   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:50.724052   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:50.766849   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:50.859538   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:50.960510   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:51.222510   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:51.226864   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:51.359887   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:51.466124   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:51.731105   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:51.734505   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:51.860232   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:51.960214   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:52.222556   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:52.227495   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:52.358793   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:52.448371   16000 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"False"
	I0115 02:47:52.461109   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:52.724291   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:52.727746   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:52.860588   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:52.944380   16000 pod_ready.go:92] pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:52.944408   16000 pod_ready.go:81] duration metric: took 23.50635472s for pod "metrics-server-7c66d45ddc-mc2hw" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:52.944419   16000 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hq969" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:52.948945   16000 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-hq969" in "kube-system" namespace has status "Ready":"True"
	I0115 02:47:52.948964   16000 pod_ready.go:81] duration metric: took 4.539172ms for pod "nvidia-device-plugin-daemonset-hq969" in "kube-system" namespace to be "Ready" ...
	I0115 02:47:52.948981   16000 pod_ready.go:38] duration metric: took 24.72677293s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 02:47:52.948994   16000 api_server.go:52] waiting for apiserver process to appear ...
	I0115 02:47:52.949044   16000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 02:47:52.963135   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:52.964351   16000 api_server.go:72] duration metric: took 36.879579967s to wait for apiserver process to appear ...
	I0115 02:47:52.964368   16000 api_server.go:88] waiting for apiserver healthz status ...
	I0115 02:47:52.964383   16000 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0115 02:47:52.969024   16000 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0115 02:47:52.970032   16000 api_server.go:141] control plane version: v1.28.4
	I0115 02:47:52.970056   16000 api_server.go:131] duration metric: took 5.681923ms to wait for apiserver health ...
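
The healthz probe above is a plain HTTPS GET that expects a 200 status and the body "ok". A minimal sketch of that check (the TLS setup here is an assumption; the real client trusts the cluster CA from the kubeconfig):

	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
	hc := &http.Client{Transport: tr, Timeout: 5 * time.Second}
	resp, err := hc.Get("https://192.168.39.115:8443/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
	}
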
	I0115 02:47:52.970065   16000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 02:47:52.979139   16000 system_pods.go:59] 18 kube-system pods found
	I0115 02:47:52.979173   16000 system_pods.go:61] "coredns-5dd5756b68-5r94g" [bd032c75-d5d4-4ca5-baba-37e836c67a51] Running
	I0115 02:47:52.979182   16000 system_pods.go:61] "csi-hostpath-attacher-0" [d67653c4-d323-41e4-9ed3-903ede6e0715] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 02:47:52.979190   16000 system_pods.go:61] "csi-hostpath-resizer-0" [a083070c-bd53-4a91-b898-a19af6e26463] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 02:47:52.979198   16000 system_pods.go:61] "csi-hostpathplugin-lmkq2" [d2324a15-3260-4489-8097-146158413b79] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 02:47:52.979213   16000 system_pods.go:61] "etcd-addons-974059" [f8d94d1a-76d6-4afe-af6c-0d5a2d278173] Running
	I0115 02:47:52.979218   16000 system_pods.go:61] "kube-apiserver-addons-974059" [808fa781-a531-4f06-8b1d-575eaed8f7ed] Running
	I0115 02:47:52.979222   16000 system_pods.go:61] "kube-controller-manager-addons-974059" [c33696fd-dcf6-44f9-a631-539e68ed8bf3] Running
	I0115 02:47:52.979227   16000 system_pods.go:61] "kube-ingress-dns-minikube" [31b1ce58-7dbf-42d0-a6f4-2edb76873274] Running
	I0115 02:47:52.979234   16000 system_pods.go:61] "kube-proxy-bsgmf" [bb367bbf-eab9-47c5-9c65-c98dfebb6ac0] Running
	I0115 02:47:52.979239   16000 system_pods.go:61] "kube-scheduler-addons-974059" [7d3c37f8-37da-4d2e-834e-d4b9c324d5fc] Running
	I0115 02:47:52.979245   16000 system_pods.go:61] "metrics-server-7c66d45ddc-mc2hw" [46aae371-3052-4919-8103-27e76a8d869a] Running
	I0115 02:47:52.979249   16000 system_pods.go:61] "nvidia-device-plugin-daemonset-hq969" [7bed1f75-9fa1-4caa-bad7-a0809fe0e985] Running
	I0115 02:47:52.979257   16000 system_pods.go:61] "registry-lxlqs" [9e31c26e-4abb-4384-bc2e-5ea1be84e604] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 02:47:52.979265   16000 system_pods.go:61] "registry-proxy-5ndqf" [5c77c7c4-f5be-4480-bea1-b1f2286e3b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 02:47:52.979274   16000 system_pods.go:61] "snapshot-controller-58dbcc7b99-dwptx" [70860404-00b9-40d8-8203-1eee013d3134] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 02:47:52.979284   16000 system_pods.go:61] "snapshot-controller-58dbcc7b99-rtxrp" [107706c4-ff1a-41dc-8953-d46771f38f79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 02:47:52.979291   16000 system_pods.go:61] "storage-provisioner" [c60b8c42-24d9-4388-bc15-54d8cd10eb0d] Running
	I0115 02:47:52.979296   16000 system_pods.go:61] "tiller-deploy-7b677967b9-nbzmm" [9b81e3a3-b370-494f-9c93-3cb39b23a5fc] Running
	I0115 02:47:52.979303   16000 system_pods.go:74] duration metric: took 9.233359ms to wait for pod list to return data ...
	I0115 02:47:52.979311   16000 default_sa.go:34] waiting for default service account to be created ...
	I0115 02:47:52.982421   16000 default_sa.go:45] found service account: "default"
	I0115 02:47:52.982436   16000 default_sa.go:55] duration metric: took 3.121045ms for default service account to be created ...
	I0115 02:47:52.982442   16000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 02:47:52.993748   16000 system_pods.go:86] 18 kube-system pods found
	I0115 02:47:52.993767   16000 system_pods.go:89] "coredns-5dd5756b68-5r94g" [bd032c75-d5d4-4ca5-baba-37e836c67a51] Running
	I0115 02:47:52.993783   16000 system_pods.go:89] "csi-hostpath-attacher-0" [d67653c4-d323-41e4-9ed3-903ede6e0715] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 02:47:52.993791   16000 system_pods.go:89] "csi-hostpath-resizer-0" [a083070c-bd53-4a91-b898-a19af6e26463] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 02:47:52.993804   16000 system_pods.go:89] "csi-hostpathplugin-lmkq2" [d2324a15-3260-4489-8097-146158413b79] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 02:47:52.993820   16000 system_pods.go:89] "etcd-addons-974059" [f8d94d1a-76d6-4afe-af6c-0d5a2d278173] Running
	I0115 02:47:52.993828   16000 system_pods.go:89] "kube-apiserver-addons-974059" [808fa781-a531-4f06-8b1d-575eaed8f7ed] Running
	I0115 02:47:52.993835   16000 system_pods.go:89] "kube-controller-manager-addons-974059" [c33696fd-dcf6-44f9-a631-539e68ed8bf3] Running
	I0115 02:47:52.993845   16000 system_pods.go:89] "kube-ingress-dns-minikube" [31b1ce58-7dbf-42d0-a6f4-2edb76873274] Running
	I0115 02:47:52.993851   16000 system_pods.go:89] "kube-proxy-bsgmf" [bb367bbf-eab9-47c5-9c65-c98dfebb6ac0] Running
	I0115 02:47:52.993859   16000 system_pods.go:89] "kube-scheduler-addons-974059" [7d3c37f8-37da-4d2e-834e-d4b9c324d5fc] Running
	I0115 02:47:52.993870   16000 system_pods.go:89] "metrics-server-7c66d45ddc-mc2hw" [46aae371-3052-4919-8103-27e76a8d869a] Running
	I0115 02:47:52.993875   16000 system_pods.go:89] "nvidia-device-plugin-daemonset-hq969" [7bed1f75-9fa1-4caa-bad7-a0809fe0e985] Running
	I0115 02:47:52.993882   16000 system_pods.go:89] "registry-lxlqs" [9e31c26e-4abb-4384-bc2e-5ea1be84e604] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 02:47:52.993887   16000 system_pods.go:89] "registry-proxy-5ndqf" [5c77c7c4-f5be-4480-bea1-b1f2286e3b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 02:47:52.993896   16000 system_pods.go:89] "snapshot-controller-58dbcc7b99-dwptx" [70860404-00b9-40d8-8203-1eee013d3134] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 02:47:52.993915   16000 system_pods.go:89] "snapshot-controller-58dbcc7b99-rtxrp" [107706c4-ff1a-41dc-8953-d46771f38f79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 02:47:52.993922   16000 system_pods.go:89] "storage-provisioner" [c60b8c42-24d9-4388-bc15-54d8cd10eb0d] Running
	I0115 02:47:52.993929   16000 system_pods.go:89] "tiller-deploy-7b677967b9-nbzmm" [9b81e3a3-b370-494f-9c93-3cb39b23a5fc] Running
	I0115 02:47:52.993937   16000 system_pods.go:126] duration metric: took 11.490685ms to wait for k8s-apps to be running ...
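
The pod census above comes down to a single List call against kube-system. A sketch under the same assumptions (an authenticated *kubernetes.Clientset named `cs`):

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%s is still %s\n", p.Name, p.Status.Phase)
		}
	}
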
	I0115 02:47:52.993942   16000 system_svc.go:44] waiting for kubelet service to be running ...
	I0115 02:47:52.993980   16000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 02:47:53.007667   16000 system_svc.go:56] duration metric: took 13.718501ms for WaitForService to wait for kubelet
	I0115 02:47:53.007685   16000 kubeadm.go:576] duration metric: took 36.92291683s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 02:47:53.007700   16000 node_conditions.go:102] verifying NodePressure condition ...
	I0115 02:47:53.010066   16000 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 02:47:53.010089   16000 node_conditions.go:123] node cpu capacity is 2
	I0115 02:47:53.010100   16000 node_conditions.go:105] duration metric: took 2.396023ms to run NodePressure ...
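
The NodePressure figures are read straight off the node object's capacity map. A sketch with the same assumed clientset (the node name is taken from this log; field names are from the core/v1 API):

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-974059", metav1.GetOptions{})
	if err != nil {
		return err
	}
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", eph.String()) // 17784752Ki
	fmt.Printf("node cpu capacity is %s\n", cpu.String())               // 2
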
	I0115 02:47:53.010109   16000 start.go:240] waiting for startup goroutines ...
	I0115 02:47:53.222135   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 02:47:53.226196   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:47:53.359696   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 02:47:53.461653   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:47:53.723289   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 187 kapi.go:96 polling lines, 02:47:53 through 02:48:16: registry, ingress-nginx, csi-hostpath-driver, and gcp-auth pods all stay "Pending: [<nil>]", each selector rechecked roughly every 500 ms ...]
	I0115 02:48:17.223046   16000 kapi.go:107] duration metric: took 49.005678529s to wait for kubernetes.io/minikube-addons=registry ...
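	The kapi.go:96 / kapi.go:107 pairs above are minikube's addon wait loop: each label selector is polled until a matching pod leaves Pending, then a duration metric is logged. The sketch below is a minimal reconstruction under stated assumptions, not minikube's actual kapi code: waitForPod, the 500 ms ticker (mirroring the per-selector cadence visible above), the kube-system namespace, and the 6-minute timeout are illustrative; only the client-go calls and the label selectors are taken from the log.

	// Minimal sketch, NOT minikube's actual kapi implementation: poll pods
	// matching a label selector until one is Running, logging each tick
	// (kapi.go:96-style) and a duration metric on success (kapi.go:107-style).
	// Assumptions: k8s.io/client-go and golang.org/x/sync/errgroup are on the
	// module path, a kubeconfig points at the cluster, and the addon pods live
	// in kube-system (illustrative; the real addons use their own namespaces).
	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"

		"golang.org/x/sync/errgroup"
	)

	func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		start := time.Now()
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
				return nil
			}
			state := "Pending"
			if len(pods.Items) > 0 {
				state = string(pods.Items[0].Status.Phase)
			}
			log.Printf("waiting for pod %q, current state: %s", selector, state)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// One goroutine per selector: this is what interleaves the four
		// "waiting for pod" streams in the log above.
		g, gctx := errgroup.WithContext(ctx)
		for _, sel := range []string{
			"kubernetes.io/minikube-addons=registry",
			"app.kubernetes.io/name=ingress-nginx",
			"kubernetes.io/minikube-addons=csi-hostpath-driver",
			"kubernetes.io/minikube-addons=gcp-auth",
		} {
			sel := sel // capture loop variable for the goroutine
			g.Go(func() error { return waitForPod(gctx, cs, "kube-system", sel) })
		}
		if err := g.Wait(); err != nil {
			log.Fatal(err)
		}
	}

	Under this reading, each wait returning is what narrows the rotation: after the registry metric above, the log drops from four interleaved selectors to three.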
	[... 163 kapi.go:96 polling lines, 02:48:17 through 02:48:44: the remaining ingress-nginx, csi-hostpath-driver, and gcp-auth pods stay "Pending: [<nil>]" ...]
	I0115 02:48:44.361053   16000 kapi.go:107] duration metric: took 1m14.007306531s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
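	With two duration metrics now printed (49.005678529s for registry, 1m14.007306531s for csi-hostpath-driver), a quick way to read a report like this is to pull out just the kapi.go:107 lines. The helper below is hypothetical, not part of minikube or its test harness; the two embedded log lines are copied verbatim from above.

	// Throwaway helper (illustrative only): scan a log for kapi.go:107
	// "duration metric" lines and tabulate per-selector readiness times.
	package main

	import (
		"bufio"
		"fmt"
		"regexp"
		"strings"
	)

	func main() {
		logText := `I0115 02:48:17.223046   16000 kapi.go:107] duration metric: took 49.005678529s to wait for kubernetes.io/minikube-addons=registry ...
	I0115 02:48:44.361053   16000 kapi.go:107] duration metric: took 1m14.007306531s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...`

		// Capture "<duration> to wait for <selector>" from each kapi.go:107 line.
		re := regexp.MustCompile(`duration metric: took (\S+) to wait for (\S+)`)
		sc := bufio.NewScanner(strings.NewReader(logText))
		for sc.Scan() {
			if m := re.FindStringSubmatch(sc.Text()); m != nil {
				fmt.Printf("%-55s ready after %s\n", m[2], m[1])
			}
		}
	}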
	[... 124 kapi.go:96 polling lines, 02:48:44 through 02:49:15: the remaining ingress-nginx and gcp-auth pods stay "Pending: [<nil>]" ...]
	I0115 02:49:15.460761   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:15.729205   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:15.960931   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:16.227359   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:16.461633   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:16.728515   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:16.963789   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:17.228254   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:17.461269   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:17.728279   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:17.961446   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:18.228503   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:18.462845   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:18.727481   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:18.962548   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:19.228155   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:19.461350   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:19.728482   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:19.961186   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:20.228360   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:20.462367   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:20.728335   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:20.961789   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:21.229262   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:21.461244   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:21.728235   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:21.961150   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:22.228468   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:22.461274   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:22.727875   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:22.961130   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:23.227514   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:23.461851   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:23.728857   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:23.961799   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:24.227267   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:24.462384   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:24.729026   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:24.961563   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:25.229295   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:25.462022   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:25.728832   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:25.963622   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:26.228149   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:26.465617   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:26.729993   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:26.961444   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:27.228780   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:27.461882   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:27.727727   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:27.961584   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:28.228865   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:28.460802   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:28.727364   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:28.961497   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:29.228122   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:29.461111   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:29.727699   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:29.961415   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:30.229597   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:30.461761   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:30.727334   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:30.963296   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:31.228251   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:31.460773   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:31.728508   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:31.961326   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:32.228145   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:32.461287   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:32.728177   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:32.961688   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:33.228677   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:33.461820   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:33.727170   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:33.960792   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:34.227050   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:34.461388   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:34.729024   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:34.960883   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:35.227563   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:35.461807   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:35.727519   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:35.961770   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:36.228263   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:36.461060   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:36.728039   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:36.961156   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:37.228101   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:37.461376   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:37.729332   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:37.961518   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:38.227968   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:38.461405   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:38.728478   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:38.961293   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:39.228575   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:39.461818   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:39.728023   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:39.961374   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:40.227974   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:40.461062   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:40.728157   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:40.961199   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:41.227822   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:41.461166   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:41.728336   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:41.961411   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:42.227010   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:42.460617   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:42.728589   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:42.962687   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:43.228537   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:43.461342   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:43.728472   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:43.961568   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:44.228208   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:44.461965   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:44.727667   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:44.961506   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:45.228707   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:45.461789   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:45.727973   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:45.961634   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:46.228523   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:46.462256   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:46.729920   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:46.961028   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:47.626207   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:47.627199   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:47.727870   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:47.961758   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:48.227404   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:48.461204   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:48.727639   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:48.961844   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:49.227526   16000 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 02:49:49.461379   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:49.728137   16000 kapi.go:107] duration metric: took 2m21.505466778s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 02:49:49.962238   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:50.460946   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:50.961833   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:51.461829   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:51.961370   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:52.462040   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:52.961757   16000 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 02:49:53.461579   16000 kapi.go:107] duration metric: took 2m20.50426392s to wait for kubernetes.io/minikube-addons=gcp-auth ...
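
The two kapi.go:107 "duration metric" lines close out the poll loops behind the long kapi.go:96 run above: each addon wait simply lists pods by label selector until one reports Running. A minimal client-go sketch of that pattern follows; the function name, interval, and error handling are illustrative assumptions, not minikube's actual implementation.

// waitForPod polls until a pod matching selector reports Running, printing
// the current phase on each pass, much like the kapi.go:96 lines above.
// Sketch only: names, interval, and timeout handling are assumptions.
package kapi

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		phase := corev1.PodPending // reported as "Pending" until a pod exists
		if len(pods.Items) > 0 {
			phase = pods.Items[0].Status.Phase
		}
		if phase == corev1.PodRunning {
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: %s\n", selector, phase)
		time.Sleep(500 * time.Millisecond) // the log above ticks roughly every half second per selector
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}
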
	I0115 02:49:53.463115   16000 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-974059 cluster.
	I0115 02:49:53.464316   16000 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 02:49:53.465654   16000 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0115 02:49:53.466989   16000 out.go:177] * Enabled addons: ingress-dns, storage-provisioner-rancher, metrics-server, storage-provisioner, nvidia-device-plugin, cloud-spanner, helm-tiller, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0115 02:49:53.468394   16000 addons.go:505] duration metric: took 2m37.383447023s for enable addons: enabled=[ingress-dns storage-provisioner-rancher metrics-server storage-provisioner nvidia-device-plugin cloud-spanner helm-tiller yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0115 02:49:53.468428   16000 start.go:245] waiting for cluster config update ...
	I0115 02:49:53.468450   16000 start.go:254] writing updated cluster config ...
	I0115 02:49:53.468748   16000 ssh_runner.go:195] Run: rm -f paused
	I0115 02:49:53.519828   16000 start.go:599] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 02:49:53.522012   16000 out.go:177] * Done! kubectl is now configured to use "addons-974059" cluster and "default" namespace by default
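
The three gcp-auth notes in the output above describe the webhook's opt-out: pods carrying a gcp-auth-skip-secret label are not mutated. A sketch of such a pod built with the Go client types is below; only the label key comes from the log, while the value "true", the names, and the image are illustrative assumptions.

// skipGCPAuthPod builds a pod the gcp-auth admission webhook should leave
// untouched. Only the label key is documented in the output above.
package kapi

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"}, // value is an assumption
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/google-samples/hello-app:1.0",
			}},
		},
	}
}

Because the mutation happens at admission time, pods created before the webhook came up keep their old spec, which is why the note suggests recreating them or re-running the enable with --refresh.
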
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	6f86f6e4f7c8b       dd1b12fcb6097       3 seconds ago        Running             hello-world-app              0                   0056eff4d07e8       hello-world-app-5d77478584-f4wz6
	b87ff28f3922b       529b5644c430c       16 seconds ago       Running             nginx                        0                   fa241070874b2       nginx
	e678a51e80cbb       3cb09943f099d       29 seconds ago       Running             headlamp                     0                   ebc5d5c3fb6bd       headlamp-7ddfbb94ff-6w8xh
	8c8508d775b0e       6d2a98b274382       About a minute ago   Running             gcp-auth                     0                   ec2f0a0622d5c       gcp-auth-d4c87556c-47sb5
	e98033c0ffe2f       1ebff0f9671bc       2 minutes ago        Exited              patch                        0                   6ad62f927ae69       ingress-nginx-admission-patch-dl67x
	7762d2e390484       1ebff0f9671bc       2 minutes ago        Exited              create                       0                   5eb5322aa3daa       ingress-nginx-admission-create-q5mf4
	e64a3e28cafb9       aa61ee9c70bc4       2 minutes ago        Exited              volume-snapshot-controller   0                   98be54d5bcdfd       snapshot-controller-58dbcc7b99-rtxrp
	a23882cc16386       31de47c733c91       3 minutes ago        Running             yakd                         0                   688bfc41ac888       yakd-dashboard-9947fc6bf-dc8lk
	243dc64d1a3af       1499ed4fbd0aa       3 minutes ago        Running             minikube-ingress-dns         0                   acbdee7e89915       kube-ingress-dns-minikube
	e03f686224360       6e38f40d628db       3 minutes ago        Running             storage-provisioner          0                   bba80ae5699ca       storage-provisioner
	4e676ddae82e0       ead0a4a53df89       3 minutes ago        Running             coredns                      0                   35d902051c4a8       coredns-5dd5756b68-5r94g
	1057fe670ce0b       83f6cc407eed8       3 minutes ago        Running             kube-proxy                   0                   d70bf889ad37e       kube-proxy-bsgmf
	d390a03d34eea       e3db313c6dbc0       4 minutes ago        Running             kube-scheduler               0                   6b2becd2d6a13       kube-scheduler-addons-974059
	bae5562ac6145       73deb9a3f7025       4 minutes ago        Running             etcd                         0                   3daea400718cb       etcd-addons-974059
	fb3c158429e51       7fe0e6f37db33       4 minutes ago        Running             kube-apiserver               0                   bb6da08c0972e       kube-apiserver-addons-974059
	0a4842b7d69c8       d058aa5ab969c       4 minutes ago        Running             kube-controller-manager      0                   8d82c3485130f       kube-controller-manager-addons-974059
	
	
	==> containerd <==
	-- Journal begins at Mon 2024-01-15 02:46:29 UTC, ends at Mon 2024-01-15 02:51:13 UTC. --
	Jan 15 02:51:08 addons-974059 containerd[689]: time="2024-01-15T02:51:08.954404429Z" level=info msg="shim disconnected" id=54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4 namespace=k8s.io
	Jan 15 02:51:08 addons-974059 containerd[689]: time="2024-01-15T02:51:08.954627148Z" level=warning msg="cleaning up after shim disconnected" id=54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4 namespace=k8s.io
	Jan 15 02:51:08 addons-974059 containerd[689]: time="2024-01-15T02:51:08.954812076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 02:51:08 addons-974059 containerd[689]: time="2024-01-15T02:51:08.978128919Z" level=info msg="StopContainer for \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\" returns successfully"
	Jan 15 02:51:08 addons-974059 containerd[689]: time="2024-01-15T02:51:08.978839026Z" level=info msg="StopPodSandbox for \"5811913590011b3cc9700c0b6eef7c6b53113f121e888194b01bc70ec1305d8a\""
	Jan 15 02:51:08 addons-974059 containerd[689]: time="2024-01-15T02:51:08.978924583Z" level=info msg="Container to stop \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.020794647Z" level=info msg="shim disconnected" id=5811913590011b3cc9700c0b6eef7c6b53113f121e888194b01bc70ec1305d8a namespace=k8s.io
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.020860896Z" level=warning msg="cleaning up after shim disconnected" id=5811913590011b3cc9700c0b6eef7c6b53113f121e888194b01bc70ec1305d8a namespace=k8s.io
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.020876500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.101406319Z" level=info msg="TearDown network for sandbox \"5811913590011b3cc9700c0b6eef7c6b53113f121e888194b01bc70ec1305d8a\" successfully"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.101507625Z" level=info msg="StopPodSandbox for \"5811913590011b3cc9700c0b6eef7c6b53113f121e888194b01bc70ec1305d8a\" returns successfully"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.200641295Z" level=info msg="RemoveContainer for \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\""
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.206488700Z" level=info msg="RemoveContainer for \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\" returns successfully"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.207045808Z" level=error msg="ContainerStatus for \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\": not found"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.770757256Z" level=info msg="ImageCreate event name:\"gcr.io/google-samples/hello-app:1.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.772322933Z" level=info msg="stop pulling image gcr.io/google-samples/hello-app:1.0: active requests=0, bytes read=12772065"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.774210272Z" level=info msg="ImageCreate event name:\"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.784888518Z" level=info msg="ImageUpdate event name:\"gcr.io/google-samples/hello-app:1.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.786046104Z" level=info msg="ImageCreate event name:\"gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.786810379Z" level=info msg="Pulled image \"gcr.io/google-samples/hello-app:1.0\" with image id \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\", repo tag \"gcr.io/google-samples/hello-app:1.0\", repo digest \"gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7\", size \"13745365\" in 3.815455374s"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.786893334Z" level=info msg="PullImage \"gcr.io/google-samples/hello-app:1.0\" returns image reference \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\""
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.792746282Z" level=info msg="CreateContainer within sandbox \"0056eff4d07e813bff9cfd5981588c296303761991e57fecfb9f1fe181571232\" for container &ContainerMetadata{Name:hello-world-app,Attempt:0,}"
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.814441202Z" level=info msg="CreateContainer within sandbox \"0056eff4d07e813bff9cfd5981588c296303761991e57fecfb9f1fe181571232\" for &ContainerMetadata{Name:hello-world-app,Attempt:0,} returns container id \"6f86f6e4f7c8b957c01eb0a7a0a39503cc0683c0e27c566c14461ab26c2b30c5\""
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.815551126Z" level=info msg="StartContainer for \"6f86f6e4f7c8b957c01eb0a7a0a39503cc0683c0e27c566c14461ab26c2b30c5\""
	Jan 15 02:51:09 addons-974059 containerd[689]: time="2024-01-15T02:51:09.907031304Z" level=info msg="StartContainer for \"6f86f6e4f7c8b957c01eb0a7a0a39503cc0683c0e27c566c14461ab26c2b30c5\" returns successfully"
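
The journal above traces the standard lifecycle for the hello-world-app container: an image pull, CreateContainer inside an existing sandbox, then StartContainer. The kubelet drives this through the CRI plugin over the node's containerd socket; a rough equivalent with the public containerd Go client is sketched below purely to make the sequence concrete (IDs, options, and the standalone setup are illustrative, not what the CRI plugin literally does).

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Same socket and namespace the journal entries above run under.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage, as in the "Pulled image gcr.io/google-samples/hello-app:1.0" entry.
	image, err := client.Pull(ctx, "gcr.io/google-samples/hello-app:1.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer followed by StartContainer (a task, in containerd terms).
	container, err := client.NewContainer(ctx, "hello-world-app-demo",
		containerd.WithNewSnapshot("hello-world-app-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
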
	
	
	==> coredns [4e676ddae82e04169a2622224bec5cc6f002644787ce0301d814a8d4197c0308] <==
	[INFO] 10.244.0.21:40396 - 53771 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000055251s
	[INFO] 10.244.0.21:36650 - 57461 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000110072s
	[INFO] 10.244.0.21:40396 - 32757 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035455s
	[INFO] 10.244.0.21:36650 - 46078 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092896s
	[INFO] 10.244.0.21:40396 - 16310 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030862s
	[INFO] 10.244.0.21:36650 - 23476 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080224s
	[INFO] 10.244.0.21:40396 - 2940 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030847s
	[INFO] 10.244.0.21:36650 - 34091 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077308s
	[INFO] 10.244.0.21:40396 - 11275 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028178s
	[INFO] 10.244.0.21:36650 - 4345 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074837s
	[INFO] 10.244.0.21:40396 - 28951 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078541s
	[INFO] 10.244.0.21:39688 - 11966 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000077995s
	[INFO] 10.244.0.21:39688 - 33684 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083801s
	[INFO] 10.244.0.21:42154 - 568 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040103s
	[INFO] 10.244.0.21:42154 - 44818 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044962s
	[INFO] 10.244.0.21:42154 - 6900 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038946s
	[INFO] 10.244.0.21:39688 - 13372 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000123888s
	[INFO] 10.244.0.21:42154 - 50345 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041051s
	[INFO] 10.244.0.21:39688 - 27768 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058996s
	[INFO] 10.244.0.21:39688 - 41461 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077014s
	[INFO] 10.244.0.21:39688 - 44068 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064112s
	[INFO] 10.244.0.21:42154 - 32489 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003776s
	[INFO] 10.244.0.21:42154 - 12310 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034069s
	[INFO] 10.244.0.21:39688 - 328 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049901s
	[INFO] 10.244.0.21:42154 - 2079 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000118171s
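
The NXDOMAIN bursts above are not failures: they are resolv.conf search-path expansion. With the cluster's usual ndots:5 setting, hello-world-app.default.svc.cluster.local has only four dots, so each search suffix is tried (and rejected) before the name is queried as-is and answered NOERROR. A tiny sketch of the candidate order, with the search list inferred from the suffixes visible in the log:

package main

import "fmt"

func main() {
	// Suffixes in the order the queries above try them; the querying pod
	// lives in ingress-nginx, so its namespace domain comes first (inferred).
	searches := []string{
		"ingress-nginx.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
	}
	name := "hello-world-app.default.svc.cluster.local"
	for _, s := range searches {
		fmt.Println(name+"."+s, "=> NXDOMAIN") // seen above for both A and AAAA
	}
	fmt.Println(name, "=> NOERROR") // the as-is query finally resolves
}
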
	
	
	==> describe nodes <==
	Name:               addons-974059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-974059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=addons-974059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T02_47_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-974059
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 02:47:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-974059
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 02:51:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 02:51:08 +0000   Mon, 15 Jan 2024 02:46:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 02:51:08 +0000   Mon, 15 Jan 2024 02:46:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 02:51:08 +0000   Mon, 15 Jan 2024 02:46:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 02:51:08 +0000   Mon, 15 Jan 2024 02:47:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    addons-974059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 88c79f973e514800add567338233a1bb
	  System UUID:                88c79f97-3e51-4800-add5-67338233a1bb
	  Boot ID:                    d1674b89-05fb-4b7f-9561-8461d1d35bbe
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-f4wz6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  gcp-auth                    gcp-auth-d4c87556c-47sb5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  headlamp                    headlamp-7ddfbb94ff-6w8xh                0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 coredns-5dd5756b68-5r94g                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m57s
	  kube-system                 etcd-addons-974059                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m10s
	  kube-system                 kube-apiserver-addons-974059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-addons-974059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-proxy-bsgmf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-scheduler-addons-974059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-dc8lk           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m57s  kube-proxy       
	  Normal  Starting                 4m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet          Node addons-974059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet          Node addons-974059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet          Node addons-974059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m10s  kubelet          Node addons-974059 status is now: NodeReady
	  Normal  RegisteredNode           3m58s  node-controller  Node addons-974059 event: Registered Node addons-974059 in Controller
	
	
	==> dmesg <==
	[  +0.101104] systemd-fstab-generator[567]: Ignoring "noauto" for root device
	[  +0.132238] systemd-fstab-generator[580]: Ignoring "noauto" for root device
	[  +0.091494] systemd-fstab-generator[591]: Ignoring "noauto" for root device
	[  +0.216000] systemd-fstab-generator[619]: Ignoring "noauto" for root device
	[  +6.486240] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.590366] systemd-fstab-generator[732]: Ignoring "noauto" for root device
	[  +4.689643] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Jan15 02:47] systemd-fstab-generator[1275]: Ignoring "noauto" for root device
	[ +12.989479] systemd-fstab-generator[1464]: Ignoring "noauto" for root device
	[  +5.596035] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.019404] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.032427] kauditd_printk_skb: 27 callbacks suppressed
	[ +18.920535] kauditd_printk_skb: 13 callbacks suppressed
	[Jan15 02:48] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.002344] kauditd_printk_skb: 26 callbacks suppressed
	[Jan15 02:49] kauditd_printk_skb: 18 callbacks suppressed
	[ +17.849023] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.124354] kauditd_printk_skb: 3 callbacks suppressed
	[Jan15 02:50] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.581689] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.768602] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.365292] kauditd_printk_skb: 4 callbacks suppressed
	[ +28.314236] kauditd_printk_skb: 9 callbacks suppressed
	[Jan15 02:51] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.924868] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [bae5562ac614519a8a767489b0eeac5f57f76a5f8dcd880f67b35256a93d6f7d] <==
	{"level":"warn","ts":"2024-01-15T02:48:37.341951Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T02:48:37.038813Z","time spent":"303.049521ms","remote":"127.0.0.1:42056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3675,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" mod_revision:784 > success:<request_put:<key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" value_size:3612 >> failure:<request_range:<key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" > >"}
	{"level":"warn","ts":"2024-01-15T02:48:37.342085Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.700827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" ","response":"range_response_count:1 size:1742"}
	{"level":"info","ts":"2024-01-15T02:48:37.342136Z","caller":"traceutil/trace.go:171","msg":"trace[475494016] range","detail":"{range_begin:/registry/secrets/gcp-auth/gcp-auth-certs; range_end:; response_count:1; response_revision:1063; }","duration":"250.749874ms","start":"2024-01-15T02:48:37.091377Z","end":"2024-01-15T02:48:37.342126Z","steps":["trace[475494016] 'agreement among raft nodes before linearized reading'  (duration: 250.676466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T02:48:37.345067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.706871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13782"}
	{"level":"info","ts":"2024-01-15T02:48:37.345126Z","caller":"traceutil/trace.go:171","msg":"trace[1210338857] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1064; }","duration":"123.769965ms","start":"2024-01-15T02:48:37.221348Z","end":"2024-01-15T02:48:37.345118Z","steps":["trace[1210338857] 'agreement among raft nodes before linearized reading'  (duration: 123.65989ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T02:48:55.719017Z","caller":"traceutil/trace.go:171","msg":"trace[1334465249] linearizableReadLoop","detail":"{readStateIndex:1182; appliedIndex:1181; }","duration":"262.612497ms","start":"2024-01-15T02:48:55.45639Z","end":"2024-01-15T02:48:55.719003Z","steps":["trace[1334465249] 'read index received'  (duration: 262.401433ms)","trace[1334465249] 'applied index is now lower than readState.Index'  (duration: 210.168µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T02:48:55.719153Z","caller":"traceutil/trace.go:171","msg":"trace[1489170511] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"355.264764ms","start":"2024-01-15T02:48:55.363867Z","end":"2024-01-15T02:48:55.719132Z","steps":["trace[1489170511] 'process raft request'  (duration: 354.96395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T02:48:55.719357Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T02:48:55.363847Z","time spent":"355.332719ms","remote":"127.0.0.1:42012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-974059\" mod_revision:1126 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-974059\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-974059\" > >"}
	{"level":"warn","ts":"2024-01-15T02:48:55.719486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.106168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-01-15T02:48:55.719531Z","caller":"traceutil/trace.go:171","msg":"trace[1010372236] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1145; }","duration":"133.158102ms","start":"2024-01-15T02:48:55.586366Z","end":"2024-01-15T02:48:55.719524Z","steps":["trace[1010372236] 'agreement among raft nodes before linearized reading'  (duration: 133.042436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T02:48:55.719619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.243831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10869"}
	{"level":"info","ts":"2024-01-15T02:48:55.719673Z","caller":"traceutil/trace.go:171","msg":"trace[1401556594] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1145; }","duration":"263.296258ms","start":"2024-01-15T02:48:55.456368Z","end":"2024-01-15T02:48:55.719664Z","steps":["trace[1401556594] 'agreement among raft nodes before linearized reading'  (duration: 263.21038ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T02:49:47.617084Z","caller":"traceutil/trace.go:171","msg":"trace[1017267312] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1278; }","duration":"397.207648ms","start":"2024-01-15T02:49:47.219858Z","end":"2024-01-15T02:49:47.617066Z","steps":["trace[1017267312] 'read index received'  (duration: 397.105993ms)","trace[1017267312] 'applied index is now lower than readState.Index'  (duration: 100.857µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T02:49:47.617394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"397.54889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13782"}
	{"level":"info","ts":"2024-01-15T02:49:47.61763Z","caller":"traceutil/trace.go:171","msg":"trace[811197462] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1230; }","duration":"397.809791ms","start":"2024-01-15T02:49:47.21981Z","end":"2024-01-15T02:49:47.61762Z","steps":["trace[811197462] 'agreement among raft nodes before linearized reading'  (duration: 397.418573ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T02:49:47.617768Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T02:49:47.219796Z","time spent":"397.954214ms","remote":"127.0.0.1:41992","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13806,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-01-15T02:49:47.617563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.523501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-01-15T02:49:47.618328Z","caller":"traceutil/trace.go:171","msg":"trace[437493409] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1230; }","duration":"278.286485ms","start":"2024-01-15T02:49:47.34003Z","end":"2024-01-15T02:49:47.618316Z","steps":["trace[437493409] 'agreement among raft nodes before linearized reading'  (duration: 277.487138ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T02:49:47.617593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.146473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2024-01-15T02:49:47.620131Z","caller":"traceutil/trace.go:171","msg":"trace[711180062] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1230; }","duration":"165.678012ms","start":"2024-01-15T02:49:47.454442Z","end":"2024-01-15T02:49:47.62012Z","steps":["trace[711180062] 'agreement among raft nodes before linearized reading'  (duration: 163.134653ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T02:50:16.143611Z","caller":"traceutil/trace.go:171","msg":"trace[1192713084] linearizableReadLoop","detail":"{readStateIndex:1532; appliedIndex:1531; }","duration":"118.192748ms","start":"2024-01-15T02:50:16.025404Z","end":"2024-01-15T02:50:16.143597Z","steps":["trace[1192713084] 'read index received'  (duration: 118.006949ms)","trace[1192713084] 'applied index is now lower than readState.Index'  (duration: 184.956µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T02:50:16.143764Z","caller":"traceutil/trace.go:171","msg":"trace[845639868] transaction","detail":"{read_only:false; response_revision:1474; number_of_response:1; }","duration":"200.694957ms","start":"2024-01-15T02:50:15.943062Z","end":"2024-01-15T02:50:16.143757Z","steps":["trace[845639868] 'process raft request'  (duration: 200.373084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T02:50:16.143965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.593368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2024-01-15T02:50:16.143989Z","caller":"traceutil/trace.go:171","msg":"trace[342502323] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1474; }","duration":"118.630856ms","start":"2024-01-15T02:50:16.025353Z","end":"2024-01-15T02:50:16.143983Z","steps":["trace[342502323] 'agreement among raft nodes before linearized reading'  (duration: 118.53601ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T02:50:37.933652Z","caller":"traceutil/trace.go:171","msg":"trace[1658722627] transaction","detail":"{read_only:false; response_revision:1583; number_of_response:1; }","duration":"104.620883ms","start":"2024-01-15T02:50:37.828906Z","end":"2024-01-15T02:50:37.933527Z","steps":["trace[1658722627] 'process raft request'  (duration: 104.432748ms)"],"step_count":1}
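
The "apply request took too long" warnings flag any request that exceeds etcd's 100ms expectation; the traces show these are mostly prefix range reads over /registry/... keys during addon churn on a 2-CPU node, not data problems. The flagged read is shaped like a plain prefix Get in clientv3, sketched below (the key comes from the log; the endpoint is the standard client port and is an assumption here).

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // assumed default client endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Same shape as the warned-about request: a range over the pods prefix.
	resp, err := cli.Get(ctx, "/registry/pods/ingress-nginx/", clientv3.WithPrefix())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d keys at revision %d\n", len(resp.Kvs), resp.Header.Revision)
}
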
	
	
	==> gcp-auth [8c8508d775b0eb90e212951db114a883b3673b62aa46755ddbc038cca74b55f0] <==
	2024/01/15 02:49:52 GCP Auth Webhook started!
	2024/01/15 02:49:53 Ready to marshal response ...
	2024/01/15 02:49:53 Ready to write response ...
	2024/01/15 02:49:53 Ready to marshal response ...
	2024/01/15 02:49:53 Ready to write response ...
	2024/01/15 02:50:03 Ready to marshal response ...
	2024/01/15 02:50:03 Ready to write response ...
	2024/01/15 02:50:07 Ready to marshal response ...
	2024/01/15 02:50:07 Ready to write response ...
	2024/01/15 02:50:08 Ready to marshal response ...
	2024/01/15 02:50:08 Ready to write response ...
	2024/01/15 02:50:11 Ready to marshal response ...
	2024/01/15 02:50:11 Ready to write response ...
	2024/01/15 02:50:11 Ready to marshal response ...
	2024/01/15 02:50:11 Ready to write response ...
	2024/01/15 02:50:12 Ready to marshal response ...
	2024/01/15 02:50:12 Ready to write response ...
	2024/01/15 02:50:26 Ready to marshal response ...
	2024/01/15 02:50:26 Ready to write response ...
	2024/01/15 02:50:28 Ready to marshal response ...
	2024/01/15 02:50:28 Ready to write response ...
	2024/01/15 02:50:51 Ready to marshal response ...
	2024/01/15 02:50:51 Ready to write response ...
	2024/01/15 02:51:03 Ready to marshal response ...
	2024/01/15 02:51:03 Ready to write response ...
	
	
	==> kernel <==
	 02:51:14 up 4 min,  0 users,  load average: 1.95, 1.49, 0.70
	Linux addons-974059 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fb3c158429e5185648047a46de5c8674935a9af0ec4c24ac882c08edb713f1b2] <==
	I0115 02:50:51.498855       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.184.130"}
	I0115 02:50:53.802006       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0115 02:50:56.510762       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0115 02:50:56.528066       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0115 02:50:57.544931       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0115 02:51:03.977341       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:03.977404       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:03.997363       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:03.997442       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.011628       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:04.011671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.034123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:04.034180       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.039760       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:04.040006       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.066941       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:04.067128       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.156648       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:04.156791       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.200015       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 02:51:04.200084       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 02:51:04.271185       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.89.192"}
	W0115 02:51:05.041037       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0115 02:51:05.158036       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0115 02:51:05.260085       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0a4842b7d69c8b1a3b0b9d302906bdabe63faefcd917dcd5f36e5123788e9053] <==
	E0115 02:51:05.043944       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:05.167165       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:05.263824       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 02:51:05.813467       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0115 02:51:05.818913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="6.209µs"
	I0115 02:51:05.825926       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0115 02:51:06.223287       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:06.223314       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 02:51:06.317666       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:06.317788       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 02:51:06.375193       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:06.375249       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 02:51:06.624876       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0115 02:51:08.510604       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:08.510643       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 02:51:08.834324       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:08.834353       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 02:51:09.269010       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:09.269035       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 02:51:10.225777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.047322ms"
	I0115 02:51:10.226179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.685µs"
	W0115 02:51:12.287313       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:12.287339       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 02:51:13.387320       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 02:51:13.387353       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [1057fe670ce0bc6466d5a6f0a0b29edd119004ea97c55d7e25d59a2096f98260] <==
	I0115 02:47:16.756437       1 server_others.go:69] "Using iptables proxy"
	I0115 02:47:16.775361       1 node.go:141] Successfully retrieved node IP: 192.168.39.115
	I0115 02:47:16.843442       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 02:47:16.843505       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 02:47:16.846432       1 server_others.go:152] "Using iptables Proxier"
	I0115 02:47:16.846513       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 02:47:16.846920       1 server.go:846] "Version info" version="v1.28.4"
	I0115 02:47:16.846932       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 02:47:16.847625       1 config.go:188] "Starting service config controller"
	I0115 02:47:16.847641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 02:47:16.847785       1 config.go:97] "Starting endpoint slice config controller"
	I0115 02:47:16.847792       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 02:47:16.848146       1 config.go:315] "Starting node config controller"
	I0115 02:47:16.848150       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 02:47:16.948515       1 shared_informer.go:318] Caches are synced for node config
	I0115 02:47:16.948537       1 shared_informer.go:318] Caches are synced for service config
	I0115 02:47:16.948555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d390a03d34eead7667d56219b08905278e7ed7f56ec5f4c7ecd6c6e6fb0da398] <==
	W0115 02:47:00.277513       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 02:47:00.277962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 02:47:00.278762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 02:47:00.278803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 02:47:00.278839       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 02:47:00.278873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 02:47:00.278887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0115 02:47:00.280355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0115 02:47:00.280376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 02:47:00.280440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 02:47:00.282321       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 02:47:00.282442       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 02:47:01.172492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 02:47:01.172570       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 02:47:01.306486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 02:47:01.306789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 02:47:01.330888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 02:47:01.330938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 02:47:01.471473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 02:47:01.471525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 02:47:01.531074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 02:47:01.531125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 02:47:01.536212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 02:47:01.536261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0115 02:47:01.949847       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 02:46:29 UTC, ends at Mon 2024-01-15 02:51:14 UTC. --
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.108886    1282 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70860404-00b9-40d8-8203-1eee013d3134-kube-api-access-ntdv2" (OuterVolumeSpecName: "kube-api-access-ntdv2") pod "70860404-00b9-40d8-8203-1eee013d3134" (UID: "70860404-00b9-40d8-8203-1eee013d3134"). InnerVolumeSpecName "kube-api-access-ntdv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.147445    1282 scope.go:117] "RemoveContainer" containerID="651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919"
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.177036    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98be54d5bcdfd3cda2e4c11315b422f73f4b622cbbe794130859cf006a9b3d38"
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.202553    1282 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2tnv\" (UniqueName: \"kubernetes.io/projected/107706c4-ff1a-41dc-8953-d46771f38f79-kube-api-access-x2tnv\") pod \"107706c4-ff1a-41dc-8953-d46771f38f79\" (UID: \"107706c4-ff1a-41dc-8953-d46771f38f79\") "
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.204672    1282 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ntdv2\" (UniqueName: \"kubernetes.io/projected/70860404-00b9-40d8-8203-1eee013d3134-kube-api-access-ntdv2\") on node \"addons-974059\" DevicePath \"\""
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.207482    1282 scope.go:117] "RemoveContainer" containerID="651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919"
	Jan 15 02:51:05 addons-974059 kubelet[1282]: E0115 02:51:05.208931    1282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919\": not found" containerID="651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919"
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.209050    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919"} err="failed to get container status \"651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919\": rpc error: code = NotFound desc = an error occurred when try to find container \"651aaef8a44a84a02a9045c2565f123bad9f45ce2342696ce00375249d282919\": not found"
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.214914    1282 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107706c4-ff1a-41dc-8953-d46771f38f79-kube-api-access-x2tnv" (OuterVolumeSpecName: "kube-api-access-x2tnv") pod "107706c4-ff1a-41dc-8953-d46771f38f79" (UID: "107706c4-ff1a-41dc-8953-d46771f38f79"). InnerVolumeSpecName "kube-api-access-x2tnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.305421    1282 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x2tnv\" (UniqueName: \"kubernetes.io/projected/107706c4-ff1a-41dc-8953-d46771f38f79-kube-api-access-x2tnv\") on node \"addons-974059\" DevicePath \"\""
	Jan 15 02:51:05 addons-974059 kubelet[1282]: I0115 02:51:05.525397    1282 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="70860404-00b9-40d8-8203-1eee013d3134" path="/var/lib/kubelet/pods/70860404-00b9-40d8-8203-1eee013d3134/volumes"
	Jan 15 02:51:07 addons-974059 kubelet[1282]: I0115 02:51:07.527161    1282 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="107706c4-ff1a-41dc-8953-d46771f38f79" path="/var/lib/kubelet/pods/107706c4-ff1a-41dc-8953-d46771f38f79/volumes"
	Jan 15 02:51:07 addons-974059 kubelet[1282]: I0115 02:51:07.528022    1282 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8ef90e59-295a-4cfb-ac75-feba01417ab3" path="/var/lib/kubelet/pods/8ef90e59-295a-4cfb-ac75-feba01417ab3/volumes"
	Jan 15 02:51:07 addons-974059 kubelet[1282]: I0115 02:51:07.528459    1282 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ea562737-07ed-4e95-8a36-e52a06bee830" path="/var/lib/kubelet/pods/ea562737-07ed-4e95-8a36-e52a06bee830/volumes"
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.146988    1282 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21937e1b-1407-45e6-be14-6c6fd51196b5-webhook-cert\") pod \"21937e1b-1407-45e6-be14-6c6fd51196b5\" (UID: \"21937e1b-1407-45e6-be14-6c6fd51196b5\") "
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.147384    1282 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxjxq\" (UniqueName: \"kubernetes.io/projected/21937e1b-1407-45e6-be14-6c6fd51196b5-kube-api-access-sxjxq\") pod \"21937e1b-1407-45e6-be14-6c6fd51196b5\" (UID: \"21937e1b-1407-45e6-be14-6c6fd51196b5\") "
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.157238    1282 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21937e1b-1407-45e6-be14-6c6fd51196b5-kube-api-access-sxjxq" (OuterVolumeSpecName: "kube-api-access-sxjxq") pod "21937e1b-1407-45e6-be14-6c6fd51196b5" (UID: "21937e1b-1407-45e6-be14-6c6fd51196b5"). InnerVolumeSpecName "kube-api-access-sxjxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.157663    1282 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21937e1b-1407-45e6-be14-6c6fd51196b5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "21937e1b-1407-45e6-be14-6c6fd51196b5" (UID: "21937e1b-1407-45e6-be14-6c6fd51196b5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.197082    1282 scope.go:117] "RemoveContainer" containerID="54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4"
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.206834    1282 scope.go:117] "RemoveContainer" containerID="54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4"
	Jan 15 02:51:09 addons-974059 kubelet[1282]: E0115 02:51:09.207277    1282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\": not found" containerID="54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4"
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.207413    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4"} err="failed to get container status \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"54fd157897e6ec803fd0c0029af557777b5aedef4aeb74ffa08118f4233031f4\": not found"
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.248142    1282 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sxjxq\" (UniqueName: \"kubernetes.io/projected/21937e1b-1407-45e6-be14-6c6fd51196b5-kube-api-access-sxjxq\") on node \"addons-974059\" DevicePath \"\""
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.248303    1282 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21937e1b-1407-45e6-be14-6c6fd51196b5-webhook-cert\") on node \"addons-974059\" DevicePath \"\""
	Jan 15 02:51:09 addons-974059 kubelet[1282]: I0115 02:51:09.525778    1282 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="21937e1b-1407-45e6-be14-6c6fd51196b5" path="/var/lib/kubelet/pods/21937e1b-1407-45e6-be14-6c6fd51196b5/volumes"
	
	
	==> storage-provisioner [e03f6862243608b3fc34c7addea06c86ebc8aebc6dcce3df79d6eba6a2e8f066] <==
	I0115 02:47:30.668419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 02:47:30.736620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 02:47:30.740419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 02:47:30.768110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 02:47:30.770266       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-974059_95ebdbcf-e3f0-480e-922b-a64b1a7ed80b!
	I0115 02:47:30.772867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"42448d47-310f-4939-8a87-53b02e7fe474", APIVersion:"v1", ResourceVersion:"786", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-974059_95ebdbcf-e3f0-480e-922b-a64b1a7ed80b became leader
	I0115 02:47:30.873974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-974059_95ebdbcf-e3f0-480e-922b-a64b1a7ed80b!
	

-- /stdout --
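The etcd log above repeatedly warns that apply requests took well over the expected 100ms, which usually points to disk or CPU pressure on the test VM rather than a product bug. One way to double-check etcd health on this profile is to run etcdctl inside the etcd static pod; a minimal sketch, assuming the pod is named etcd-addons-974059 and the certificates live under /var/lib/minikube/certs/etcd (both assumptions, not confirmed by this log):

	# Hypothetical probe: pod name and cert paths are assumptions.
	kubectl --context addons-974059 -n kube-system exec etcd-addons-974059 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status -w table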
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-974059 -n addons-974059
helpers_test.go:261: (dbg) Run:  kubectl --context addons-974059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (23.77s)
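The apiserver log above shows the snapshot.storage.k8s.io v1/v1beta1 group versions being removed at 02:51:04-05, immediately after which the controller-manager's metadata informers loop on "Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource"; that removal window brackets this test's failure time. A quick way to see which of the affected group versions the apiserver still serves, assuming kubectl is pointed at the same context (the grep pattern is only illustrative):

	kubectl --context addons-974059 api-versions | grep -E 'snapshot|gadget'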

TestHA/serial/StopSecondaryNode (81.78s)

=== RUN   TestHA/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 node stop m02 -v=7 --alsologtostderr
E0115 03:05:18.386699   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-680410 node stop m02 -v=7 --alsologtostderr: (1m0.305701599s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 3 (19.098670654s)

-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0115 03:05:55.568159   27951 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:05:55.568261   27951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:05:55.568268   27951 out.go:309] Setting ErrFile to fd 2...
	I0115 03:05:55.568273   27951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:05:55.568459   27951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:05:55.568622   27951 out.go:303] Setting JSON to false
	I0115 03:05:55.568655   27951 mustload.go:65] Loading cluster: ha-680410
	I0115 03:05:55.568791   27951 notify.go:220] Checking for updates...
	I0115 03:05:55.569015   27951 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:05:55.569028   27951 status.go:255] checking status of ha-680410 ...
	I0115 03:05:55.569426   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:05:55.569488   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:05:55.587860   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0115 03:05:55.588258   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:05:55.588715   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:05:55.588737   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:05:55.589128   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:05:55.589389   27951 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:05:55.591011   27951 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:05:55.591043   27951 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:05:55.591476   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:05:55.591531   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:05:55.605440   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0115 03:05:55.605858   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:05:55.606309   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:05:55.606328   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:05:55.606694   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:05:55.606926   27951 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:05:55.609770   27951 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:05:55.610275   27951 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:05:55.610308   27951 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:05:55.610441   27951 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:05:55.610765   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:05:55.610816   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:05:55.623884   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40057
	I0115 03:05:55.624231   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:05:55.624642   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:05:55.624662   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:05:55.624991   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:05:55.625144   27951 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:05:55.625356   27951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:05:55.625377   27951 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:05:55.627976   27951 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:05:55.628343   27951 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:05:55.628378   27951 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:05:55.628539   27951 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:05:55.628673   27951 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:05:55.628820   27951 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:05:55.628927   27951 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:05:55.724487   27951 ssh_runner.go:195] Run: systemctl --version
	I0115 03:05:55.731419   27951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:05:55.745355   27951 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:05:55.745385   27951 api_server.go:166] Checking apiserver status ...
	I0115 03:05:55.745430   27951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:05:55.758270   27951 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:05:55.775620   27951 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:05:55.775704   27951 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:05:55.784725   27951 api_server.go:204] freezer state: "THAWED"
	I0115 03:05:55.784754   27951 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:05:55.790694   27951 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:05:55.790712   27951 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:05:55.790720   27951 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:05:55.790743   27951 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:05:55.791043   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:05:55.791077   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:05:55.805533   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0115 03:05:55.805896   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:05:55.806339   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:05:55.806382   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:05:55.806752   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:05:55.806923   27951 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:05:55.808571   27951 status.go:330] ha-680410-m02 host status = "Running" (err=<nil>)
	I0115 03:05:55.808592   27951 host.go:66] Checking if "ha-680410-m02" exists ...
	I0115 03:05:55.808907   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:05:55.808950   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:05:55.822807   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0115 03:05:55.823157   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:05:55.823637   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:05:55.823664   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:05:55.823946   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:05:55.824122   27951 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 03:05:55.826946   27951 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:05:55.827467   27951 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 03:05:55.827494   27951 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:05:55.827607   27951 host.go:66] Checking if "ha-680410-m02" exists ...
	I0115 03:05:55.827879   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:05:55.827926   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:05:55.842083   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0115 03:05:55.842429   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:05:55.842849   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:05:55.842875   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:05:55.843155   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:05:55.843335   27951 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 03:05:55.843552   27951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:05:55.843570   27951 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 03:05:55.846015   27951 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:05:55.846429   27951 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 03:05:55.846463   27951 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:05:55.846585   27951 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 03:05:55.846735   27951 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 03:05:55.846871   27951 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 03:05:55.846985   27951 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	W0115 03:06:14.243605   27951 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.178:22: connect: no route to host
	W0115 03:06:14.243734   27951 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	E0115 03:06:14.243758   27951 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:14.243767   27951 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 03:06:14.243790   27951 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:14.243801   27951 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:14.244254   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:14.244307   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:14.258340   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0115 03:06:14.258702   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:14.259076   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:06:14.259097   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:14.259488   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:14.259679   27951 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:14.261185   27951 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:14.261203   27951 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:14.261523   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:14.261554   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:14.275826   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I0115 03:06:14.276183   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:14.276600   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:06:14.276619   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:14.276863   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:14.277010   27951 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:14.279534   27951 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:14.279934   27951 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:14.279960   27951 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:14.280092   27951 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:14.280380   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:14.280417   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:14.293631   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0115 03:06:14.294017   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:14.294437   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:06:14.294448   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:14.294673   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:14.294826   27951 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:14.295038   27951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:14.295057   27951 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:14.297527   27951 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:14.297938   27951 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:14.297966   27951 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:14.298087   27951 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:14.298270   27951 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:14.298399   27951 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:14.298513   27951 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:14.395694   27951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:14.412418   27951 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:14.412444   27951 api_server.go:166] Checking apiserver status ...
	I0115 03:06:14.412479   27951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:14.425672   27951 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:14.434732   27951 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:14.434778   27951 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:14.444221   27951 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:14.444246   27951 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:14.451721   27951 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:14.451745   27951 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:14.451767   27951 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:14.451792   27951 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:14.452055   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:14.452093   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:14.466330   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42451
	I0115 03:06:14.466663   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:14.467076   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:06:14.467095   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:14.467358   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:14.467577   27951 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:14.469035   27951 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:14.469049   27951 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:14.469338   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:14.469375   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:14.483750   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0115 03:06:14.484196   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:14.484603   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:06:14.484628   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:14.484976   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:14.485173   27951 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:14.487640   27951 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:14.488052   27951 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:14.488090   27951 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:14.488207   27951 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:14.488475   27951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:14.488510   27951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:14.503206   27951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35823
	I0115 03:06:14.503604   27951 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:14.503975   27951 main.go:141] libmachine: Using API Version  1
	I0115 03:06:14.503996   27951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:14.504307   27951 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:14.504457   27951 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:14.504625   27951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:14.504641   27951 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:14.506986   27951 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:14.507335   27951 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:14.507368   27951 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:14.507497   27951 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:14.507657   27951 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:14.507806   27951 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:14.507936   27951 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:14.594978   27951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:14.608355   27951 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr" : exit status 3
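The stderr above pinpoints where the status probe fails for m02: after the node is stopped, the SSH dial to 192.168.39.178:22 returns "no route to host", so the host is reported as Error and kubelet/apiserver as Nonexistent. The same two checks can be repeated by hand; the first command is the one the test ran, and the second is a plain TCP probe of the stopped node's SSH port (assuming netcat is installed on the host):

	out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
	nc -vz -w 5 192.168.39.178 22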
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-680410 -n ha-680410
helpers_test.go:244: <<< TestHA/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestHA/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-680410 logs -n 25: (1.516164797s)
helpers_test.go:252: TestHA/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                Args                                |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410-m03.txt  |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410:/home/docker/cp-test_ha-680410-m03_ha-680410.txt         |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410 sudo cat                                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m03_ha-680410.txt                   |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m02:/home/docker/cp-test_ha-680410-m03_ha-680410-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m02 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m03_ha-680410-m02.txt               |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04:/home/docker/cp-test_ha-680410-m03_ha-680410-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m04 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m03_ha-680410-m04.txt               |           |         |         |                     |                     |
	| cp      | ha-680410 cp testdata/cp-test.txt                                  | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04:/home/docker/cp-test.txt                             |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410-m04.txt  |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410:/home/docker/cp-test_ha-680410-m04_ha-680410.txt         |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410 sudo cat                                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m04_ha-680410.txt                   |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m02:/home/docker/cp-test_ha-680410-m04_ha-680410-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m02 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m04_ha-680410-m02.txt               |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03:/home/docker/cp-test_ha-680410-m04_ha-680410-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m03 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m04_ha-680410-m03.txt               |           |         |         |                     |                     |
	| node    | ha-680410 node stop m02 -v=7                                       | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:05 UTC |
	|         | --alsologtostderr                                                  |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 02:58:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 02:58:27.903728   23809 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:58:27.903853   23809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:27.903862   23809 out.go:309] Setting ErrFile to fd 2...
	I0115 02:58:27.903866   23809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:27.904065   23809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:58:27.904637   23809 out.go:303] Setting JSON to false
	I0115 02:58:27.905465   23809 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2453,"bootTime":1705285055,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:58:27.905538   23809 start.go:138] virtualization: kvm guest
	I0115 02:58:27.907797   23809 out.go:177] * [ha-680410] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:58:27.909278   23809 out.go:177]   - MINIKUBE_LOCATION=17909
	I0115 02:58:27.910743   23809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:58:27.909269   23809 notify.go:220] Checking for updates...
	I0115 02:58:27.913534   23809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:58:27.914911   23809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:27.916245   23809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 02:58:27.917510   23809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 02:58:27.918788   23809 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:58:27.950836   23809 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 02:58:27.952083   23809 start.go:296] selected driver: kvm2
	I0115 02:58:27.952097   23809 start.go:900] validating driver "kvm2" against <nil>
	I0115 02:58:27.952118   23809 start.go:911] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 02:58:27.953037   23809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:58:27.953145   23809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17909-7685/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 02:58:27.965710   23809 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 02:58:27.965761   23809 start_flags.go:308] no existing cluster config was found, will generate one from the flags 
	I0115 02:58:27.965944   23809 start_flags.go:943] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 02:58:27.965996   23809 cni.go:84] Creating CNI manager for ""
	I0115 02:58:27.966009   23809 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0115 02:58:27.966017   23809 start_flags.go:317] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 02:58:27.966064   23809 start.go:339] cluster config:
	{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:58:27.966152   23809 iso.go:125] acquiring lock: {Name:mk557eda9a6ce643c635b77cd4c9cb212ca64fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:58:27.968579   23809 out.go:177] * Starting "ha-680410" primary control-plane node in "ha-680410" cluster
	I0115 02:58:27.970736   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:58:27.970759   23809 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 02:58:27.970765   23809 cache.go:56] Caching tarball of preloaded images
	I0115 02:58:27.970839   23809 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 02:58:27.970852   23809 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 02:58:27.971145   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:58:27.971165   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json: {Name:mk893384b7b0ad5aa2d7ef4824af052fc6525c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
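
The profile.go/lock.go pair above persists the freshly generated cluster config to the profile's config.json under a write lock. A minimal sketch of that save step, assuming an in-process mutex in place of minikube's cross-process file lock, plus an atomic temp-file-and-rename write:

    package profile

    import (
    	"encoding/json"
    	"os"
    	"sync"
    )

    var saveMu sync.Mutex // stand-in for the cross-process lock in lock.go above

    // SaveConfig marshals the profile and writes it atomically: write to a
    // temp file first, then rename over config.json so a concurrent reader
    // never sees a torn file.
    func SaveConfig(path string, cfg any) error {
    	saveMu.Lock()
    	defer saveMu.Unlock()
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		return err
    	}
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }
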
	I0115 02:58:27.971296   23809 start.go:360] acquireMachinesLock for ha-680410: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 02:58:27.971328   23809 start.go:364] duration metric: took 16.89µs to acquireMachinesLock for "ha-680410"
	I0115 02:58:27.971349   23809 start.go:93] Provisioning new machine with config: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:58:27.971418   23809 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 02:58:27.973140   23809 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 02:58:27.973245   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:58:27.973275   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:58:27.985342   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0115 02:58:27.985713   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:58:27.986229   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:58:27.986247   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:58:27.986535   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:58:27.986685   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:27.986805   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:27.986930   23809 start.go:159] libmachine.API.Create for "ha-680410" (driver="kvm2")
	I0115 02:58:27.986961   23809 client.go:168] LocalClient.Create starting
	I0115 02:58:27.986986   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 02:58:27.987010   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:58:27.987024   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:58:27.987066   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 02:58:27.987083   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:58:27.987097   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:58:27.987114   23809 main.go:141] libmachine: Running pre-create checks...
	I0115 02:58:27.987122   23809 main.go:141] libmachine: (ha-680410) Calling .PreCreateCheck
	I0115 02:58:27.987453   23809 main.go:141] libmachine: (ha-680410) Calling .GetConfigRaw
	I0115 02:58:27.987786   23809 main.go:141] libmachine: Creating machine...
	I0115 02:58:27.987800   23809 main.go:141] libmachine: (ha-680410) Calling .Create
	I0115 02:58:27.987899   23809 main.go:141] libmachine: (ha-680410) Creating KVM machine...
	I0115 02:58:27.989007   23809 main.go:141] libmachine: (ha-680410) DBG | found existing default KVM network
	I0115 02:58:27.989682   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:27.989531   23832 network.go:208] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I0115 02:58:27.989719   23809 main.go:141] libmachine: (ha-680410) DBG | created network xml: 
	I0115 02:58:27.989745   23809 main.go:141] libmachine: (ha-680410) DBG | <network>
	I0115 02:58:27.989771   23809 main.go:141] libmachine: (ha-680410) DBG |   <name>mk-ha-680410</name>
	I0115 02:58:27.989797   23809 main.go:141] libmachine: (ha-680410) DBG |   <dns enable='no'/>
	I0115 02:58:27.989825   23809 main.go:141] libmachine: (ha-680410) DBG |   
	I0115 02:58:27.989834   23809 main.go:141] libmachine: (ha-680410) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0115 02:58:27.989840   23809 main.go:141] libmachine: (ha-680410) DBG |     <dhcp>
	I0115 02:58:27.989848   23809 main.go:141] libmachine: (ha-680410) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0115 02:58:27.989866   23809 main.go:141] libmachine: (ha-680410) DBG |     </dhcp>
	I0115 02:58:27.989890   23809 main.go:141] libmachine: (ha-680410) DBG |   </ip>
	I0115 02:58:27.989904   23809 main.go:141] libmachine: (ha-680410) DBG |   
	I0115 02:58:27.989914   23809 main.go:141] libmachine: (ha-680410) DBG | </network>
	I0115 02:58:27.989921   23809 main.go:141] libmachine: (ha-680410) DBG | 
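
The network definition above is rendered before the private KVM network is created. A sketch that reproduces the logged XML with text/template; the parameter struct and function name are illustrative, not the kvm2 driver's actual types:

    package netxml

    import (
    	"io"
    	"text/template"
    )

    // networkTmpl mirrors the XML printed above: an isolated network with
    // DNS disabled and a DHCP range covering most of the /24.
    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    // netParams is an illustrative parameter struct.
    type netParams struct {
    	Name, Gateway, Netmask, ClientMin, ClientMax string
    }

    func renderNetworkXML(w io.Writer) error {
    	t := template.Must(template.New("net").Parse(networkTmpl))
    	return t.Execute(w, netParams{
    		Name:      "mk-ha-680410",
    		Gateway:   "192.168.39.1",
    		Netmask:   "255.255.255.0",
    		ClientMin: "192.168.39.2",
    		ClientMax: "192.168.39.253",
    	})
    }
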
	I0115 02:58:27.994312   23809 main.go:141] libmachine: (ha-680410) DBG | trying to create private KVM network mk-ha-680410 192.168.39.0/24...
	I0115 02:58:28.057701   23809 main.go:141] libmachine: (ha-680410) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410 ...
	I0115 02:58:28.057743   23809 main.go:141] libmachine: (ha-680410) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 02:58:28.057781   23809 main.go:141] libmachine: (ha-680410) DBG | private KVM network mk-ha-680410 192.168.39.0/24 created
	I0115 02:58:28.057813   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.057611   23832 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:28.057871   23809 main.go:141] libmachine: (ha-680410) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 02:58:28.263960   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.263848   23832 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa...
	I0115 02:58:28.419978   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.419883   23832 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/ha-680410.rawdisk...
	I0115 02:58:28.420003   23809 main.go:141] libmachine: (ha-680410) DBG | Writing magic tar header
	I0115 02:58:28.420013   23809 main.go:141] libmachine: (ha-680410) DBG | Writing SSH key tar header
	I0115 02:58:28.420021   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.419992   23832 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410 ...
	I0115 02:58:28.420134   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410 (perms=drwx------)
	I0115 02:58:28.420154   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 02:58:28.420162   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410
	I0115 02:58:28.420172   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 02:58:28.420180   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:28.420205   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 02:58:28.420217   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 02:58:28.420232   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 02:58:28.420239   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins
	I0115 02:58:28.420265   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 02:58:28.420287   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home
	I0115 02:58:28.420314   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 02:58:28.420330   23809 main.go:141] libmachine: (ha-680410) DBG | Skipping /home - not owner
	I0115 02:58:28.420343   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 02:58:28.420361   23809 main.go:141] libmachine: (ha-680410) Creating domain...
	I0115 02:58:28.421252   23809 main.go:141] libmachine: (ha-680410) define libvirt domain using xml: 
	I0115 02:58:28.421276   23809 main.go:141] libmachine: (ha-680410) <domain type='kvm'>
	I0115 02:58:28.421295   23809 main.go:141] libmachine: (ha-680410)   <name>ha-680410</name>
	I0115 02:58:28.421312   23809 main.go:141] libmachine: (ha-680410)   <memory unit='MiB'>2200</memory>
	I0115 02:58:28.421326   23809 main.go:141] libmachine: (ha-680410)   <vcpu>2</vcpu>
	I0115 02:58:28.421342   23809 main.go:141] libmachine: (ha-680410)   <features>
	I0115 02:58:28.421351   23809 main.go:141] libmachine: (ha-680410)     <acpi/>
	I0115 02:58:28.421358   23809 main.go:141] libmachine: (ha-680410)     <apic/>
	I0115 02:58:28.421365   23809 main.go:141] libmachine: (ha-680410)     <pae/>
	I0115 02:58:28.421371   23809 main.go:141] libmachine: (ha-680410)     
	I0115 02:58:28.421377   23809 main.go:141] libmachine: (ha-680410)   </features>
	I0115 02:58:28.421392   23809 main.go:141] libmachine: (ha-680410)   <cpu mode='host-passthrough'>
	I0115 02:58:28.421399   23809 main.go:141] libmachine: (ha-680410)   
	I0115 02:58:28.421404   23809 main.go:141] libmachine: (ha-680410)   </cpu>
	I0115 02:58:28.421410   23809 main.go:141] libmachine: (ha-680410)   <os>
	I0115 02:58:28.421415   23809 main.go:141] libmachine: (ha-680410)     <type>hvm</type>
	I0115 02:58:28.421421   23809 main.go:141] libmachine: (ha-680410)     <boot dev='cdrom'/>
	I0115 02:58:28.421429   23809 main.go:141] libmachine: (ha-680410)     <boot dev='hd'/>
	I0115 02:58:28.421436   23809 main.go:141] libmachine: (ha-680410)     <bootmenu enable='no'/>
	I0115 02:58:28.421443   23809 main.go:141] libmachine: (ha-680410)   </os>
	I0115 02:58:28.421448   23809 main.go:141] libmachine: (ha-680410)   <devices>
	I0115 02:58:28.421456   23809 main.go:141] libmachine: (ha-680410)     <disk type='file' device='cdrom'>
	I0115 02:58:28.421466   23809 main.go:141] libmachine: (ha-680410)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/boot2docker.iso'/>
	I0115 02:58:28.421475   23809 main.go:141] libmachine: (ha-680410)       <target dev='hdc' bus='scsi'/>
	I0115 02:58:28.421496   23809 main.go:141] libmachine: (ha-680410)       <readonly/>
	I0115 02:58:28.421514   23809 main.go:141] libmachine: (ha-680410)     </disk>
	I0115 02:58:28.421534   23809 main.go:141] libmachine: (ha-680410)     <disk type='file' device='disk'>
	I0115 02:58:28.421549   23809 main.go:141] libmachine: (ha-680410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 02:58:28.421569   23809 main.go:141] libmachine: (ha-680410)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/ha-680410.rawdisk'/>
	I0115 02:58:28.421583   23809 main.go:141] libmachine: (ha-680410)       <target dev='hda' bus='virtio'/>
	I0115 02:58:28.421595   23809 main.go:141] libmachine: (ha-680410)     </disk>
	I0115 02:58:28.421610   23809 main.go:141] libmachine: (ha-680410)     <interface type='network'>
	I0115 02:58:28.421623   23809 main.go:141] libmachine: (ha-680410)       <source network='mk-ha-680410'/>
	I0115 02:58:28.421637   23809 main.go:141] libmachine: (ha-680410)       <model type='virtio'/>
	I0115 02:58:28.421650   23809 main.go:141] libmachine: (ha-680410)     </interface>
	I0115 02:58:28.421664   23809 main.go:141] libmachine: (ha-680410)     <interface type='network'>
	I0115 02:58:28.421682   23809 main.go:141] libmachine: (ha-680410)       <source network='default'/>
	I0115 02:58:28.421699   23809 main.go:141] libmachine: (ha-680410)       <model type='virtio'/>
	I0115 02:58:28.421711   23809 main.go:141] libmachine: (ha-680410)     </interface>
	I0115 02:58:28.421722   23809 main.go:141] libmachine: (ha-680410)     <serial type='pty'>
	I0115 02:58:28.421733   23809 main.go:141] libmachine: (ha-680410)       <target port='0'/>
	I0115 02:58:28.421744   23809 main.go:141] libmachine: (ha-680410)     </serial>
	I0115 02:58:28.421761   23809 main.go:141] libmachine: (ha-680410)     <console type='pty'>
	I0115 02:58:28.421775   23809 main.go:141] libmachine: (ha-680410)       <target type='serial' port='0'/>
	I0115 02:58:28.421789   23809 main.go:141] libmachine: (ha-680410)     </console>
	I0115 02:58:28.421803   23809 main.go:141] libmachine: (ha-680410)     <rng model='virtio'>
	I0115 02:58:28.421825   23809 main.go:141] libmachine: (ha-680410)       <backend model='random'>/dev/random</backend>
	I0115 02:58:28.421838   23809 main.go:141] libmachine: (ha-680410)     </rng>
	I0115 02:58:28.421851   23809 main.go:141] libmachine: (ha-680410)     
	I0115 02:58:28.421862   23809 main.go:141] libmachine: (ha-680410)     
	I0115 02:58:28.421875   23809 main.go:141] libmachine: (ha-680410)   </devices>
	I0115 02:58:28.421886   23809 main.go:141] libmachine: (ha-680410) </domain>
	I0115 02:58:28.421901   23809 main.go:141] libmachine: (ha-680410) 
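
The domain XML is then handed to libvirt ("Creating domain..." below). The kvm2 driver talks to the libvirt API directly; purely as an illustrative stand-in, the same registration can be sketched by writing the XML to a file and shelling out to virsh:

    package domain

    import (
    	"os"
    	"os/exec"
    )

    // defineDomain registers a domain definition with libvirt. Sketch only:
    // `virsh define` stands in for the driver's API call, and the
    // "Creating domain..." step would correspond to `virsh start`.
    func defineDomain(domainXML string) error {
    	f, err := os.CreateTemp("", "domain-*.xml")
    	if err != nil {
    		return err
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(domainXML); err != nil {
    		return err
    	}
    	if err := f.Close(); err != nil {
    		return err
    	}
    	return exec.Command("virsh", "define", f.Name()).Run()
    }
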
	I0115 02:58:28.425805   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:29:85:68 in network default
	I0115 02:58:28.426327   23809 main.go:141] libmachine: (ha-680410) Ensuring networks are active...
	I0115 02:58:28.426358   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:28.426841   23809 main.go:141] libmachine: (ha-680410) Ensuring network default is active
	I0115 02:58:28.427100   23809 main.go:141] libmachine: (ha-680410) Ensuring network mk-ha-680410 is active
	I0115 02:58:28.427590   23809 main.go:141] libmachine: (ha-680410) Getting domain xml...
	I0115 02:58:28.428162   23809 main.go:141] libmachine: (ha-680410) Creating domain...
	I0115 02:58:29.564499   23809 main.go:141] libmachine: (ha-680410) Waiting to get IP...
	I0115 02:58:29.565396   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:29.565783   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:29.565840   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:29.565782   23832 retry.go:31] will retry after 240.639484ms: waiting for machine to come up
	I0115 02:58:29.808229   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:29.808674   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:29.808722   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:29.808632   23832 retry.go:31] will retry after 383.501823ms: waiting for machine to come up
	I0115 02:58:30.195323   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:30.195727   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:30.195759   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:30.195676   23832 retry.go:31] will retry after 453.282979ms: waiting for machine to come up
	I0115 02:58:30.650179   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:30.650633   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:30.650661   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:30.650597   23832 retry.go:31] will retry after 509.075269ms: waiting for machine to come up
	I0115 02:58:31.161065   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:31.161443   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:31.161472   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:31.161401   23832 retry.go:31] will retry after 471.62185ms: waiting for machine to come up
	I0115 02:58:31.634969   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:31.635370   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:31.635417   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:31.635320   23832 retry.go:31] will retry after 647.582826ms: waiting for machine to come up
	I0115 02:58:32.283989   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:32.284354   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:32.284383   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:32.284302   23832 retry.go:31] will retry after 993.298534ms: waiting for machine to come up
	I0115 02:58:33.278728   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:33.279095   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:33.279123   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:33.279051   23832 retry.go:31] will retry after 1.081585318s: waiting for machine to come up
	I0115 02:58:34.362107   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:34.362505   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:34.362535   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:34.362466   23832 retry.go:31] will retry after 1.251610896s: waiting for machine to come up
	I0115 02:58:35.615925   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:35.616437   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:35.616469   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:35.616376   23832 retry.go:31] will retry after 1.802852546s: waiting for machine to come up
	I0115 02:58:37.420309   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:37.420833   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:37.420865   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:37.420783   23832 retry.go:31] will retry after 2.055276332s: waiting for machine to come up
	I0115 02:58:39.477437   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:39.477858   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:39.477886   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:39.477799   23832 retry.go:31] will retry after 3.431189295s: waiting for machine to come up
	I0115 02:58:42.913263   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:42.913755   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:42.913804   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:42.913729   23832 retry.go:31] will retry after 4.071377514s: waiting for machine to come up
	I0115 02:58:46.988351   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:46.988687   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:46.988707   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:46.988650   23832 retry.go:31] will retry after 4.734714935s: waiting for machine to come up
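
The retry.go lines above poll for a DHCP lease with a growing, jittered delay (240ms at first, up to about 4.7s) until the guest reports an IP. A minimal sketch of that loop; the lookup callback stands in for the driver's find-lease-by-MAC query:

    package waitip

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until the guest has a DHCP lease, sleeping a
    // jittered, roughly doubling interval between attempts, as retry.go
    // does in the log above.
    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
    	wait := 250 * time.Millisecond
    	start := time.Now()
    	for {
    		ip, err := lookup()
    		if err == nil {
    			return ip, nil
    		}
    		if time.Since(start) > deadline {
    			return "", fmt.Errorf("waiting for machine to come up: %w", err)
    		}
    		d := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
    		time.Sleep(d)
    		if wait < 4*time.Second {
    			wait *= 2
    		}
    	}
    }
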
	I0115 02:58:51.727284   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.727690   23809 main.go:141] libmachine: (ha-680410) Found IP for machine: 192.168.39.194
	I0115 02:58:51.727720   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has current primary IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.727733   23809 main.go:141] libmachine: (ha-680410) Reserving static IP address...
	I0115 02:58:51.728095   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find host DHCP lease matching {name: "ha-680410", mac: "52:54:00:f3:e1:70", ip: "192.168.39.194"} in network mk-ha-680410
	I0115 02:58:51.795648   23809 main.go:141] libmachine: (ha-680410) DBG | Getting to WaitForSSH function...
	I0115 02:58:51.795685   23809 main.go:141] libmachine: (ha-680410) Reserved static IP address: 192.168.39.194
	I0115 02:58:51.795700   23809 main.go:141] libmachine: (ha-680410) Waiting for SSH to be available...
	I0115 02:58:51.797888   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.798223   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:51.798244   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.798415   23809 main.go:141] libmachine: (ha-680410) DBG | Using SSH client type: external
	I0115 02:58:51.798440   23809 main.go:141] libmachine: (ha-680410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa (-rw-------)
	I0115 02:58:51.798509   23809 main.go:141] libmachine: (ha-680410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 02:58:51.798539   23809 main.go:141] libmachine: (ha-680410) DBG | About to run SSH command:
	I0115 02:58:51.798557   23809 main.go:141] libmachine: (ha-680410) DBG | exit 0
	I0115 02:58:51.886688   23809 main.go:141] libmachine: (ha-680410) DBG | SSH cmd err, output: <nil>: 
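
WaitForSSH succeeds once the external SSH client shown above can run "exit 0" against the guest, which proves sshd is up and accepting the injected key. A sketch of the same probe via os/exec, using the key options from the logged command line:

    package waitssh

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady shells out to ssh and runs `exit 0`, exactly the probe in
    // the log; a nil error means sshd answered and accepted the key.
    func sshReady(ip, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@"+ip, "exit 0")
    	return cmd.Run() == nil
    }

    // waitForSSH retries the probe until it succeeds or the deadline passes.
    func waitForSSH(ip, keyPath string, deadline time.Duration) error {
    	for start := time.Now(); time.Since(start) < deadline; time.Sleep(time.Second) {
    		if sshReady(ip, keyPath) {
    			return nil
    		}
    	}
    	return fmt.Errorf("ssh to %s never became available", ip)
    }
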
	I0115 02:58:51.886893   23809 main.go:141] libmachine: (ha-680410) KVM machine creation complete!
	I0115 02:58:51.887170   23809 main.go:141] libmachine: (ha-680410) Calling .GetConfigRaw
	I0115 02:58:51.887678   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:51.887860   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:51.888002   23809 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 02:58:51.888019   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:58:51.889224   23809 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 02:58:51.889244   23809 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 02:58:51.889277   23809 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 02:58:51.889294   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:51.891740   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.892129   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:51.892159   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.892267   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:51.892468   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:51.892624   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:51.892767   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:51.892944   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:51.893290   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:51.893304   23809 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 02:58:52.010186   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:58:52.010211   23809 main.go:141] libmachine: Detecting the provisioner...
	I0115 02:58:52.010231   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.012537   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.012875   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.012901   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.013018   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.013194   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.013340   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.013474   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.013599   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.013945   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.013959   23809 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 02:58:52.127568   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 02:58:52.127653   23809 main.go:141] libmachine: found compatible host: buildroot
	I0115 02:58:52.127669   23809 main.go:141] libmachine: Provisioning with buildroot...
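
The provisioner is picked by catting /etc/os-release on the guest and matching the result against known distributions; ID=buildroot selects the buildroot provisioner used by minikube's ISO. A sketch of that match, assuming the file contents were already fetched over SSH:

    package provision

    import "strings"

    // detectProvisioner scans /etc/os-release output (as captured above)
    // for the ID field; buildroot is the compatible host in this run.
    func detectProvisioner(osRelease string) string {
    	for _, line := range strings.Split(osRelease, "\n") {
    		if v, ok := strings.CutPrefix(line, "ID="); ok {
    			return strings.Trim(v, `"`)
    		}
    	}
    	return "unknown"
    }
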
	I0115 02:58:52.127683   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:52.127940   23809 buildroot.go:166] provisioning hostname "ha-680410"
	I0115 02:58:52.127964   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:52.128136   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.130729   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.131034   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.131056   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.131207   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.131373   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.131531   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.131679   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.131805   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.132120   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.132134   23809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-680410 && echo "ha-680410" | sudo tee /etc/hostname
	I0115 02:58:52.258746   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-680410
	
	I0115 02:58:52.258786   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.261304   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.261689   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.261719   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.261859   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.262016   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.262172   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.262272   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.262456   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.262808   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.262828   23809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-680410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-680410/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-680410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 02:58:52.387103   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:58:52.387133   23809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 02:58:52.387167   23809 buildroot.go:174] setting up certificates
	I0115 02:58:52.387177   23809 provision.go:84] configureAuth start
	I0115 02:58:52.387186   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:52.387439   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:52.389861   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.390181   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.390212   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.390338   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.392342   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.392634   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.392662   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.392793   23809 provision.go:143] copyHostCerts
	I0115 02:58:52.392835   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:58:52.392875   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem, removing ...
	I0115 02:58:52.392895   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:58:52.392983   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 02:58:52.393068   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:58:52.393090   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem, removing ...
	I0115 02:58:52.393099   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:58:52.393133   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 02:58:52.393185   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:58:52.393206   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem, removing ...
	I0115 02:58:52.393216   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:58:52.393255   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
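
copyHostCerts above applies a remove-then-copy pattern per certificate, so a stale ca.pem, cert.pem, or key.pem is never partially overwritten. A sketch of one such copy with the same found/removing/cp logging shape; the function name and 0600 mode are illustrative:

    package certs

    import (
    	"fmt"
    	"os"
    )

    // copyHostCert removes an existing target before writing the source
    // bytes with owner-only permissions, mirroring the exec_runner lines.
    func copyHostCert(src, dst string) error {
    	data, err := os.ReadFile(src)
    	if err != nil {
    		return err
    	}
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Printf("found %s, removing ...\n", dst)
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, len(data))
    	return os.WriteFile(dst, data, 0o600)
    }
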
	I0115 02:58:52.393368   23809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.ha-680410 san=[127.0.0.1 192.168.39.194 ha-680410 localhost minikube]
	I0115 02:58:52.587892   23809 provision.go:177] copyRemoteCerts
	I0115 02:58:52.587948   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 02:58:52.587976   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.590227   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.590474   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.590522   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.590640   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.590820   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.590974   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.591112   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:52.675339   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 02:58:52.675407   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 02:58:52.697515   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 02:58:52.697571   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0115 02:58:52.719673   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 02:58:52.719717   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 02:58:52.741049   23809 provision.go:87] duration metric: took 353.863276ms to configureAuth
	I0115 02:58:52.741067   23809 buildroot.go:189] setting minikube options for container-runtime
	I0115 02:58:52.741254   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:58:52.741281   23809 main.go:141] libmachine: Checking connection to Docker...
	I0115 02:58:52.741293   23809 main.go:141] libmachine: (ha-680410) Calling .GetURL
	I0115 02:58:52.742266   23809 main.go:141] libmachine: (ha-680410) DBG | Using libvirt version 6000000
	I0115 02:58:52.744486   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.744830   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.744857   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.745018   23809 main.go:141] libmachine: Docker is up and running!
	I0115 02:58:52.745034   23809 main.go:141] libmachine: Reticulating splines...
	I0115 02:58:52.745042   23809 client.go:171] duration metric: took 24.758071499s to LocalClient.Create
	I0115 02:58:52.745068   23809 start.go:167] duration metric: took 24.758138882s to libmachine.API.Create "ha-680410"
	I0115 02:58:52.745091   23809 start.go:293] postStartSetup for "ha-680410" (driver="kvm2")
	I0115 02:58:52.745107   23809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 02:58:52.745128   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.745346   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 02:58:52.745382   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.747454   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.747763   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.747784   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.747911   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.748086   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.748206   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.748354   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:52.835320   23809 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 02:58:52.839327   23809 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 02:58:52.839349   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 02:58:52.839418   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 02:58:52.839517   23809 filesync.go:149] local asset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> 149542.pem in /etc/ssl/certs
	I0115 02:58:52.839529   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /etc/ssl/certs/149542.pem
	I0115 02:58:52.839648   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 02:58:52.847179   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:58:52.868606   23809 start.go:296] duration metric: took 123.502219ms for postStartSetup
	I0115 02:58:52.868645   23809 main.go:141] libmachine: (ha-680410) Calling .GetConfigRaw
	I0115 02:58:52.869146   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:52.871436   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.871764   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.871791   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.872024   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:58:52.872175   23809 start.go:128] duration metric: took 24.900747472s to createHost
	I0115 02:58:52.872194   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.874389   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.874677   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.874702   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.874834   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.874996   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.875128   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.875265   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.875430   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.875852   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.875868   23809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 02:58:52.991518   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705287532.963773010
	
	I0115 02:58:52.991538   23809 fix.go:216] guest clock: 1705287532.963773010
	I0115 02:58:52.991548   23809 fix.go:229] Guest: 2024-01-15 02:58:52.96377301 +0000 UTC Remote: 2024-01-15 02:58:52.872185068 +0000 UTC m=+25.015719209 (delta=91.587942ms)
	I0115 02:58:52.991575   23809 fix.go:200] guest clock delta is within tolerance: 91.587942ms
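
The guest-clock check above compares the VM's clock against the host and accepts it when the absolute delta is inside a tolerance. A compact sketch of that comparison; only the 91.587942ms delta and the timestamps come from the log, the 1s tolerance is an assumed value:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// clockDeltaOK reports whether guest and host clocks agree within tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
	
	func main() {
		guest := time.Unix(1705287532, 963773010)      // guest clock from the log
		host := guest.Add(-91587942 * time.Nanosecond) // delta from the log: 91.587942ms
		fmt.Println(clockDeltaOK(guest, host, time.Second))
	}
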
	I0115 02:58:52.991582   23809 start.go:83] releasing machines lock for "ha-680410", held for 25.02024292s
	I0115 02:58:52.991603   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.991821   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:52.993928   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.994236   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.994264   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.994392   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.994803   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.994936   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.995046   23809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 02:58:52.995083   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.995145   23809 ssh_runner.go:195] Run: cat /version.json
	I0115 02:58:52.995169   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.997819   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.997846   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.998112   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.998141   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.998167   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.998191   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.998280   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.998384   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.998454   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.998506   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.998600   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.998659   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.998764   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:52.998798   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:53.106328   23809 ssh_runner.go:195] Run: systemctl --version
	I0115 02:58:53.111741   23809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 02:58:53.117367   23809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 02:58:53.117417   23809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 02:58:53.131855   23809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 02:58:53.131870   23809 start.go:494] detecting cgroup driver to use...
	I0115 02:58:53.131912   23809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 02:58:53.164602   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 02:58:53.176236   23809 docker.go:217] disabling cri-docker service (if available) ...
	I0115 02:58:53.176289   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 02:58:53.187346   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 02:58:53.198293   23809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 02:58:53.295889   23809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 02:58:53.413165   23809 docker.go:233] disabling docker service ...
	I0115 02:58:53.413227   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 02:58:53.426285   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 02:58:53.436501   23809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 02:58:53.545675   23809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 02:58:53.653772   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 02:58:53.665847   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 02:58:53.682550   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 02:58:53.691204   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 02:58:53.699943   23809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 02:58:53.699986   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 02:58:53.708535   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:58:53.717167   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 02:58:53.725624   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:58:53.734453   23809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 02:58:53.743425   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
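
The sed invocations above rewrite /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, the runc v2 shim, and a fixed CNI conf_dir. A hedged sketch of issuing one such rewrite from Go via os/exec, roughly the shape of these Run: lines (not minikube's actual runner code):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// disableSystemdCgroup applies the SystemdCgroup rewrite shown in the log.
	func disableSystemdCgroup(configPath string) error {
		expr := `s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`
		out, err := exec.Command("sudo", "sed", "-i", "-r", expr, configPath).CombinedOutput()
		if err != nil {
			return fmt.Errorf("sed failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		fmt.Println(disableSystemdCgroup("/etc/containerd/config.toml"))
	}
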
	I0115 02:58:53.752227   23809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 02:58:53.760003   23809 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 02:58:53.760053   23809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 02:58:53.771991   23809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 02:58:53.779814   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:58:53.884216   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:58:53.914477   23809 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 02:58:53.914539   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:58:53.918700   23809 retry.go:31] will retry after 951.496472ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
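
The retry above polls for /run/containerd/containerd.sock until the restarted daemon recreates it, within the 60s budget announced at start.go:541. A minimal sketch of that wait loop (a fixed 1s sleep is assumed here; the logged retry uses a jittered backoff):

	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second) // fixed sleep; the real retry backs off with jitter
		}
		return errors.New("timed out waiting for " + path)
	}
	
	func main() {
		fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	}
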
	I0115 02:58:54.870838   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:58:54.876153   23809 start.go:562] Will wait 60s for crictl version
	I0115 02:58:54.876202   23809 ssh_runner.go:195] Run: which crictl
	I0115 02:58:54.879728   23809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 02:58:54.919213   23809 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 02:58:54.919276   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:58:54.947428   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:58:54.976417   23809 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 02:58:54.977575   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:54.980102   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:54.980468   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:54.980493   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:54.980638   23809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 02:58:54.984434   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:58:54.996841   23809 kubeadm.go:877] updating cluster {Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} ...
	I0115 02:58:54.996930   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:58:54.996966   23809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:58:55.034588   23809 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 02:58:55.034639   23809 ssh_runner.go:195] Run: which lz4
	I0115 02:58:55.038287   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 02:58:55.038356   23809 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 02:58:55.042367   23809 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 02:58:55.042397   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0115 02:58:56.707744   23809 containerd.go:548] duration metric: took 1.669411813s to copy over tarball
	I0115 02:58:56.707808   23809 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 02:58:59.439268   23809 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.731440064s)
	I0115 02:58:59.439294   23809 containerd.go:555] duration metric: took 2.731530096s to extract the tarball
	I0115 02:58:59.439301   23809 ssh_runner.go:146] rm: /preloaded.tar.lz4
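
The preload path above avoids pulling images one by one: the lz4 tarball is copied in, unpacked over /var with xattrs preserved (security.capability matters for binaries in the image store), then deleted. The extraction, reduced to a Go wrapper around the logged tar command (illustrative only):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// extractPreload mirrors the logged tar invocation flag for flag.
	func extractPreload(tarball, dest string) error {
		return exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball).Run()
	}
	
	func main() {
		fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
	}
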
	I0115 02:58:59.478956   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:58:59.584118   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:58:59.611585   23809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:58:59.645661   23809 retry.go:31] will retry after 357.409654ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T02:58:59Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0115 02:59:00.003194   23809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:59:00.048479   23809 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 02:59:00.048500   23809 cache_images.go:84] Images are preloaded, skipping loading
	I0115 02:59:00.048508   23809 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.28.4 containerd true true} ...
	I0115 02:59:00.048650   23809 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-680410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0115 02:59:00.048713   23809 ssh_runner.go:195] Run: sudo crictl info
	I0115 02:59:00.082962   23809 cni.go:84] Creating CNI manager for ""
	I0115 02:59:00.082987   23809 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0115 02:59:00.083000   23809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0115 02:59:00.083023   23809 kubeadm.go:180] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-680410 NodeName:ha-680410 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 02:59:00.083177   23809 kubeadm.go:186] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-680410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 02:59:00.083206   23809 kube-vip.go:101] generating kube-vip config ...
	I0115 02:59:00.083281   23809 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_ddns
	      value: "false"
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.6.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
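
The kube-vip static pod above provides the API-server virtual IP (192.168.39.254) via ARP announcement and leader election over the plndr-cp-lock lease, so any surviving control-plane node can hold the VIP. A sketch of how such a manifest fragment can be templated, substituting only the per-cluster values (this is an assumption about the approach, not kube-vip.go's actual template):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Hypothetical fragment of the kube-vip env block with the per-cluster
	// values pulled out; only the port and VIP address vary between clusters.
	const vipEnv = `    - name: port
	      value: "{{.Port}}"
	    - name: address
	      value: {{.VIP}}
	`
	
	func main() {
		t := template.Must(template.New("kube-vip").Parse(vipEnv))
		_ = t.Execute(os.Stdout, struct {
			Port int
			VIP  string
		}{Port: 8443, VIP: "192.168.39.254"})
	}
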
	I0115 02:59:00.083339   23809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 02:59:00.092724   23809 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 02:59:00.092784   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0115 02:59:00.101930   23809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0115 02:59:00.117120   23809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 02:59:00.132785   23809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0115 02:59:00.148459   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1265 bytes)
	I0115 02:59:00.163517   23809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0115 02:59:00.167017   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:59:00.177933   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:59:00.276490   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 02:59:00.292964   23809 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410 for IP: 192.168.39.194
	I0115 02:59:00.292980   23809 certs.go:194] generating shared ca certs ...
	I0115 02:59:00.293001   23809 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:00.293135   23809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 02:59:00.293181   23809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 02:59:00.293192   23809 certs.go:256] generating profile certs ...
	I0115 02:59:00.293249   23809 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key
	I0115 02:59:00.293261   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt with IP's: []
	I0115 02:59:00.989226   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt ...
	I0115 02:59:00.989258   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt: {Name:mkf0142a7c21ef12ae6ae6373ad6ebe719ca4b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:00.989437   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key ...
	I0115 02:59:00.989450   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key: {Name:mk6018755014a1632c637089ca5c3e252e5f2d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:00.989547   23809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8
	I0115 02:59:00.989563   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.254]
	I0115 02:59:01.147530   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8 ...
	I0115 02:59:01.147557   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8: {Name:mk1eac799ad83c47e55ca98d5f5e7de325eb259b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.147736   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8 ...
	I0115 02:59:01.147758   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8: {Name:mk7cca06559f993cf6cde82356f22c160f4172a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.147854   23809 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt
	I0115 02:59:01.147942   23809 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key
	I0115 02:59:01.148000   23809 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key
	I0115 02:59:01.148015   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt with IP's: []
	I0115 02:59:01.206142   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt ...
	I0115 02:59:01.206164   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt: {Name:mk6103e92ab2e2ce044b2163a740fbdd519b44b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.206312   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key ...
	I0115 02:59:01.206326   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key: {Name:mk9a523ac6d589c235e113f9b3edd6c22e1cdaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.206412   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 02:59:01.206429   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 02:59:01.206439   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 02:59:01.206452   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 02:59:01.206464   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 02:59:01.206477   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 02:59:01.206490   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 02:59:01.206503   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 02:59:01.206555   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem (1338 bytes)
	W0115 02:59:01.206586   23809 certs.go:480] ignoring /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954_empty.pem, impossibly tiny 0 bytes
	I0115 02:59:01.206595   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 02:59:01.206615   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 02:59:01.206636   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 02:59:01.206658   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 02:59:01.206695   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:59:01.206724   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.206738   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.206755   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem -> /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.207238   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 02:59:01.238872   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 02:59:01.266439   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 02:59:01.290886   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 02:59:01.321567   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0115 02:59:01.343248   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 02:59:01.365280   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 02:59:01.387437   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 02:59:01.409614   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /usr/share/ca-certificates/149542.pem (1708 bytes)
	I0115 02:59:01.431539   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 02:59:01.453400   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem --> /usr/share/ca-certificates/14954.pem (1338 bytes)
	I0115 02:59:01.475185   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 02:59:01.490668   23809 ssh_runner.go:195] Run: openssl version
	I0115 02:59:01.496688   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14954.pem && ln -fs /usr/share/ca-certificates/14954.pem /etc/ssl/certs/14954.pem"
	I0115 02:59:01.506153   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.510590   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:54 /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.510637   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.515925   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14954.pem /etc/ssl/certs/51391683.0"
	I0115 02:59:01.525100   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149542.pem && ln -fs /usr/share/ca-certificates/149542.pem /etc/ssl/certs/149542.pem"
	I0115 02:59:01.534256   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.538705   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 15 02:54 /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.538754   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.544176   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149542.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 02:59:01.553892   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 02:59:01.563382   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.567918   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.567968   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.573285   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
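
The test -L / ln -fs pairs above build OpenSSL-style hash links: each CA file gets a <subject-hash>.0 symlink so certificate verification can find it by hash in /etc/ssl/certs. A sketch that recomputes the hash with the same openssl invocation and creates the link (illustrative; paths mirror the log):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkByHash creates the <subject-hash>.0 symlink OpenSSL expects.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // drop a stale link, mirroring ln -fs
		return os.Symlink(certPath, link)
	}
	
	func main() {
		fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
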
	I0115 02:59:01.582656   23809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 02:59:01.586737   23809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 02:59:01.586787   23809 kubeadm.go:391] StartCluster: {Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:59:01.586846   23809 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 02:59:01.586877   23809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 02:59:01.625249   23809 cri.go:89] found id: ""
	I0115 02:59:01.625314   23809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 02:59:01.633920   23809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 02:59:01.642245   23809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 02:59:01.650670   23809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 02:59:01.650683   23809 kubeadm.go:156] found existing configuration files:
	
	I0115 02:59:01.650719   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0115 02:59:01.658390   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0115 02:59:01.658436   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0115 02:59:01.667471   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0115 02:59:01.674852   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0115 02:59:01.674893   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0115 02:59:01.683808   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0115 02:59:01.691248   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0115 02:59:01.691295   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0115 02:59:01.698852   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0115 02:59:01.705993   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0115 02:59:01.706024   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0115 02:59:01.713540   23809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0115 02:59:01.824792   23809 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0115 02:59:01.824894   23809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0115 02:59:01.971708   23809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 02:59:01.971825   23809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 02:59:01.971950   23809 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 02:59:02.196836   23809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 02:59:02.198809   23809 out.go:204]   - Generating certificates and keys ...
	I0115 02:59:02.198911   23809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0115 02:59:02.198999   23809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0115 02:59:02.303288   23809 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 02:59:02.464084   23809 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0115 02:59:02.706555   23809 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0115 02:59:02.803711   23809 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0115 02:59:02.953146   23809 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0115 02:59:02.953437   23809 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-680410 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0115 02:59:03.162158   23809 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0115 02:59:03.162295   23809 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-680410 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0115 02:59:03.289721   23809 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 02:59:03.466079   23809 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 02:59:03.557828   23809 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0115 02:59:03.558088   23809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 02:59:04.008340   23809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 02:59:04.135617   23809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 02:59:04.197203   23809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 02:59:04.275573   23809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 02:59:04.277129   23809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 02:59:04.280751   23809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 02:59:04.282694   23809 out.go:204]   - Booting up control plane ...
	I0115 02:59:04.282785   23809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 02:59:04.282887   23809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 02:59:04.282974   23809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 02:59:04.297335   23809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 02:59:04.298193   23809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 02:59:04.298321   23809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0115 02:59:04.410805   23809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 02:59:13.988169   23809 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.582154 seconds
	I0115 02:59:13.988424   23809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 02:59:14.008285   23809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 02:59:14.539012   23809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 02:59:14.539278   23809 kubeadm.go:309] [mark-control-plane] Marking the node ha-680410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 02:59:15.054941   23809 kubeadm.go:309] [bootstrap-token] Using token: uo86kr.pjq7c4l94qhdmxio
	I0115 02:59:15.056344   23809 out.go:204]   - Configuring RBAC rules ...
	I0115 02:59:15.056451   23809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 02:59:15.062713   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 02:59:15.079045   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 02:59:15.081978   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 02:59:15.085715   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 02:59:15.088528   23809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 02:59:15.102910   23809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 02:59:15.302861   23809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0115 02:59:15.468650   23809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0115 02:59:15.468675   23809 kubeadm.go:309] 
	I0115 02:59:15.468730   23809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0115 02:59:15.468736   23809 kubeadm.go:309] 
	I0115 02:59:15.468829   23809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0115 02:59:15.468865   23809 kubeadm.go:309] 
	I0115 02:59:15.468912   23809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0115 02:59:15.468991   23809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 02:59:15.469080   23809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 02:59:15.469094   23809 kubeadm.go:309] 
	I0115 02:59:15.469160   23809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0115 02:59:15.469174   23809 kubeadm.go:309] 
	I0115 02:59:15.469251   23809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 02:59:15.469261   23809 kubeadm.go:309] 
	I0115 02:59:15.469328   23809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0115 02:59:15.469433   23809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 02:59:15.469517   23809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 02:59:15.469534   23809 kubeadm.go:309] 
	I0115 02:59:15.469642   23809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 02:59:15.469758   23809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0115 02:59:15.469769   23809 kubeadm.go:309] 
	I0115 02:59:15.469893   23809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uo86kr.pjq7c4l94qhdmxio \
	I0115 02:59:15.470052   23809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 \
	I0115 02:59:15.470082   23809 kubeadm.go:309] 	--control-plane 
	I0115 02:59:15.470091   23809 kubeadm.go:309] 
	I0115 02:59:15.470218   23809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0115 02:59:15.470231   23809 kubeadm.go:309] 
	I0115 02:59:15.470330   23809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uo86kr.pjq7c4l94qhdmxio \
	I0115 02:59:15.470489   23809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 
	I0115 02:59:15.470887   23809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
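Note on the join commands printed above: the --discovery-token-ca-cert-hash value is not random. kubeadm derives it as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so joining nodes can pin the CA without transferring the certificate itself. A minimal standalone sketch of that derivation (illustration only, not minikube code; /etc/kubernetes/pki/ca.crt is the conventional kubeadm CA path):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA certificate generated during the [certs] phase above.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum) // matches the --discovery-token-ca-cert-hash above
	}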
	I0115 02:59:15.470947   23809 cni.go:84] Creating CNI manager for ""
	I0115 02:59:15.470960   23809 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0115 02:59:15.472691   23809 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 02:59:15.473991   23809 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 02:59:15.479136   23809 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 02:59:15.479149   23809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 02:59:15.512184   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 02:59:16.514348   23809 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.002112589s)
	I0115 02:59:16.514399   23809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 02:59:16.514482   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:16.514512   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-680410 minikube.k8s.io/updated_at=2024_01_15T02_59_16_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=true
	I0115 02:59:16.590916   23809 ops.go:34] apiserver oom_adj: -16
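The ops.go:34 line above records the kube-apiserver's legacy /proc/<pid>/oom_adj value; -16 is strongly negative, meaning the kernel's OOM killer will avoid killing the API server under memory pressure. A hedged sketch of the same probe as the logged ssh_runner command, assuming pgrep is present on the guest:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the kube-apiserver PID, as the pgrep in the logged command does.
		out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		// Read the legacy OOM adjustment; -16 means "deprioritize killing this process".
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}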
	I0115 02:59:16.756085   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:17.256957   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:17.756509   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:18.256482   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:18.756165   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:19.256475   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:19.756574   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:20.256193   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:20.756230   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:21.256483   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:21.756473   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:22.256193   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:22.757079   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:23.256928   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:23.756584   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:24.256999   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:24.757170   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:24.854518   23809 kubeadm.go:1106] duration metric: took 8.340099407s to wait for elevateKubeSystemPrivileges
	W0115 02:59:24.854556   23809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0115 02:59:24.854563   23809 kubeadm.go:393] duration metric: took 23.267779901s to StartCluster
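The burst of identical `kubectl get sa default` runs above (one every ~500 ms from 02:59:16 to 02:59:24) is a poll-until-ready loop: elevateKubeSystemPrivileges can only bind cluster-admin once the `default` ServiceAccount exists, so minikube retries the lookup until it succeeds. A generic sketch of that polling shape (the helper name and timeout are illustrative assumptions, not minikube's exact code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// pollUntil runs check every interval until it succeeds or timeout elapses,
	// mirroring the repeated "get sa default" invocations in the log above.
	func pollUntil(interval, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return errors.New("timed out: " + err.Error())
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
			return exec.Command("kubectl", "get", "sa", "default").Run()
		})
		fmt.Println("default service account ready:", err == nil)
	}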
	I0115 02:59:24.854584   23809 settings.go:142] acquiring lock: {Name:mk9dadd460779833544b9ee743c73944f5d142f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:24.854668   23809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:59:24.855287   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/kubeconfig: {Name:mkf5d0331212c9d6c1cc4e6eb80eacb35f40ffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:24.855525   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 02:59:24.855542   23809 start.go:232] HA cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:59:24.855565   23809 start.go:240] waiting for startup goroutines ...
	I0115 02:59:24.855572   23809 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 02:59:24.855624   23809 addons.go:69] Setting storage-provisioner=true in profile "ha-680410"
	I0115 02:59:24.855649   23809 addons.go:69] Setting default-storageclass=true in profile "ha-680410"
	I0115 02:59:24.855708   23809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-680410"
	I0115 02:59:24.855720   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:24.855654   23809 addons.go:234] Setting addon storage-provisioner=true in "ha-680410"
	I0115 02:59:24.855782   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 02:59:24.856101   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.856128   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.856201   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.856239   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.870155   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0115 02:59:24.870170   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0115 02:59:24.870557   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.870603   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.870987   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.871011   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.871102   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.871122   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.871343   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.871400   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.871514   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:24.871808   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.871836   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.873335   23809 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:59:24.873526   23809 kapi.go:59] client config for ha-680410: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key", CAFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 02:59:24.874066   23809 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 02:59:24.874167   23809 addons.go:234] Setting addon default-storageclass=true in "ha-680410"
	I0115 02:59:24.874201   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 02:59:24.874488   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.874514   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.886120   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0115 02:59:24.886552   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.886985   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.887004   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.887376   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.887547   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:24.887685   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0115 02:59:24.888027   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.888601   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.888617   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.888943   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.889096   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:59:24.891145   23809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 02:59:24.889384   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.892488   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.892565   23809 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 02:59:24.892582   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 02:59:24.892601   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:59:24.895268   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.895714   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:59:24.895746   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.895909   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:59:24.896056   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:59:24.896180   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:59:24.896328   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:59:24.907116   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0115 02:59:24.907455   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.907852   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.907870   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.908222   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.908394   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:24.909896   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:59:24.910119   23809 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 02:59:24.910137   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 02:59:24.910148   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:59:24.912454   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.912844   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:59:24.912871   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.913008   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:59:24.913147   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:59:24.913251   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:59:24.913374   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:59:25.028130   23809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 02:59:25.031629   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 02:59:25.061298   23809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 02:59:26.301601   23809 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.269937964s)
	I0115 02:59:26.301653   23809 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
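Unpacking the sed pipeline that just completed: it edits the Corefile in the coredns ConfigMap, inserting a `log` directive before the existing `errors` line and the following hosts block before the `forward . /etc/resolv.conf` line, which is what makes host.minikube.internal resolve to the host gateway (192.168.39.1) from inside the cluster:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}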
	I0115 02:59:26.301706   23809 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.240373348s)
	I0115 02:59:26.301755   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.301770   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.301790   23809 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273613504s)
	I0115 02:59:26.301824   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.301840   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.302099   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302125   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302134   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302152   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302171   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.302192   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.302251   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302266   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302280   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.302289   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.302384   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302404   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302421   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302577   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302630   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302586   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302750   23809 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0115 02:59:26.302769   23809 round_trippers.go:469] Request Headers:
	I0115 02:59:26.302780   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 02:59:26.302792   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 02:59:26.317368   23809 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0115 02:59:26.318073   23809 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0115 02:59:26.318089   23809 round_trippers.go:469] Request Headers:
	I0115 02:59:26.318101   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 02:59:26.318114   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 02:59:26.318126   23809 round_trippers.go:473]     Content-Type: application/json
	I0115 02:59:26.320646   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 02:59:26.320786   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.320802   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.321053   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.321071   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.322811   23809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 02:59:26.324145   23809 addons.go:505] duration metric: took 1.468567691s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0115 02:59:26.324191   23809 start.go:245] waiting for cluster config update ...
	I0115 02:59:26.324209   23809 start.go:254] writing updated cluster config ...
	I0115 02:59:26.325931   23809 out.go:177] 
	I0115 02:59:26.327432   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:26.327499   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:59:26.329325   23809 out.go:177] * Starting "ha-680410-m02" control-plane node in "ha-680410" cluster
	I0115 02:59:26.330656   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:59:26.330674   23809 cache.go:56] Caching tarball of preloaded images
	I0115 02:59:26.330746   23809 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 02:59:26.330756   23809 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 02:59:26.330813   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:59:26.330976   23809 start.go:360] acquireMachinesLock for ha-680410-m02: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 02:59:26.331031   23809 start.go:364] duration metric: took 33.141µs to acquireMachinesLock for "ha-680410-m02"
	I0115 02:59:26.331051   23809 start.go:93] Provisioning new machine with config: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:59:26.331140   23809 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0115 02:59:26.332874   23809 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 02:59:26.332942   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:26.332963   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:26.346540   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0115 02:59:26.346945   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:26.347530   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:26.347557   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:26.347844   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:26.348018   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:26.348161   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:26.348303   23809 start.go:159] libmachine.API.Create for "ha-680410" (driver="kvm2")
	I0115 02:59:26.348325   23809 client.go:168] LocalClient.Create starting
	I0115 02:59:26.348354   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 02:59:26.348383   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:59:26.348396   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:59:26.348443   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 02:59:26.348461   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:59:26.348472   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:59:26.348485   23809 main.go:141] libmachine: Running pre-create checks...
	I0115 02:59:26.348493   23809 main.go:141] libmachine: (ha-680410-m02) Calling .PreCreateCheck
	I0115 02:59:26.348642   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetConfigRaw
	I0115 02:59:26.349088   23809 main.go:141] libmachine: Creating machine...
	I0115 02:59:26.349110   23809 main.go:141] libmachine: (ha-680410-m02) Calling .Create
	I0115 02:59:26.349238   23809 main.go:141] libmachine: (ha-680410-m02) Creating KVM machine...
	I0115 02:59:26.350365   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found existing default KVM network
	I0115 02:59:26.350494   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found existing private KVM network mk-ha-680410
	I0115 02:59:26.350612   23809 main.go:141] libmachine: (ha-680410-m02) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02 ...
	I0115 02:59:26.350643   23809 main.go:141] libmachine: (ha-680410-m02) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 02:59:26.350696   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.350594   24145 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:59:26.350806   23809 main.go:141] libmachine: (ha-680410-m02) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 02:59:26.550923   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.550773   24145 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa...
	I0115 02:59:26.682150   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.682041   24145 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/ha-680410-m02.rawdisk...
	I0115 02:59:26.682180   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Writing magic tar header
	I0115 02:59:26.682191   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Writing SSH key tar header
	I0115 02:59:26.682200   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.682145   24145 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02 ...
	I0115 02:59:26.682281   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02
	I0115 02:59:26.682338   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 02:59:26.682352   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02 (perms=drwx------)
	I0115 02:59:26.682366   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 02:59:26.682382   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 02:59:26.682394   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:59:26.682414   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 02:59:26.682427   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 02:59:26.682436   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins
	I0115 02:59:26.682450   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home
	I0115 02:59:26.682463   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 02:59:26.682474   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Skipping /home - not owner
	I0115 02:59:26.682494   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 02:59:26.682508   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 02:59:26.682524   23809 main.go:141] libmachine: (ha-680410-m02) Creating domain...
	I0115 02:59:26.683307   23809 main.go:141] libmachine: (ha-680410-m02) define libvirt domain using xml: 
	I0115 02:59:26.683331   23809 main.go:141] libmachine: (ha-680410-m02) <domain type='kvm'>
	I0115 02:59:26.683342   23809 main.go:141] libmachine: (ha-680410-m02)   <name>ha-680410-m02</name>
	I0115 02:59:26.683356   23809 main.go:141] libmachine: (ha-680410-m02)   <memory unit='MiB'>2200</memory>
	I0115 02:59:26.683365   23809 main.go:141] libmachine: (ha-680410-m02)   <vcpu>2</vcpu>
	I0115 02:59:26.683376   23809 main.go:141] libmachine: (ha-680410-m02)   <features>
	I0115 02:59:26.683385   23809 main.go:141] libmachine: (ha-680410-m02)     <acpi/>
	I0115 02:59:26.683413   23809 main.go:141] libmachine: (ha-680410-m02)     <apic/>
	I0115 02:59:26.683423   23809 main.go:141] libmachine: (ha-680410-m02)     <pae/>
	I0115 02:59:26.683435   23809 main.go:141] libmachine: (ha-680410-m02)     
	I0115 02:59:26.683449   23809 main.go:141] libmachine: (ha-680410-m02)   </features>
	I0115 02:59:26.683461   23809 main.go:141] libmachine: (ha-680410-m02)   <cpu mode='host-passthrough'>
	I0115 02:59:26.683473   23809 main.go:141] libmachine: (ha-680410-m02)   
	I0115 02:59:26.683485   23809 main.go:141] libmachine: (ha-680410-m02)   </cpu>
	I0115 02:59:26.683513   23809 main.go:141] libmachine: (ha-680410-m02)   <os>
	I0115 02:59:26.683535   23809 main.go:141] libmachine: (ha-680410-m02)     <type>hvm</type>
	I0115 02:59:26.683547   23809 main.go:141] libmachine: (ha-680410-m02)     <boot dev='cdrom'/>
	I0115 02:59:26.683560   23809 main.go:141] libmachine: (ha-680410-m02)     <boot dev='hd'/>
	I0115 02:59:26.683572   23809 main.go:141] libmachine: (ha-680410-m02)     <bootmenu enable='no'/>
	I0115 02:59:26.683586   23809 main.go:141] libmachine: (ha-680410-m02)   </os>
	I0115 02:59:26.683599   23809 main.go:141] libmachine: (ha-680410-m02)   <devices>
	I0115 02:59:26.683612   23809 main.go:141] libmachine: (ha-680410-m02)     <disk type='file' device='cdrom'>
	I0115 02:59:26.683631   23809 main.go:141] libmachine: (ha-680410-m02)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/boot2docker.iso'/>
	I0115 02:59:26.683643   23809 main.go:141] libmachine: (ha-680410-m02)       <target dev='hdc' bus='scsi'/>
	I0115 02:59:26.683658   23809 main.go:141] libmachine: (ha-680410-m02)       <readonly/>
	I0115 02:59:26.683667   23809 main.go:141] libmachine: (ha-680410-m02)     </disk>
	I0115 02:59:26.683674   23809 main.go:141] libmachine: (ha-680410-m02)     <disk type='file' device='disk'>
	I0115 02:59:26.683684   23809 main.go:141] libmachine: (ha-680410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 02:59:26.683692   23809 main.go:141] libmachine: (ha-680410-m02)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/ha-680410-m02.rawdisk'/>
	I0115 02:59:26.683700   23809 main.go:141] libmachine: (ha-680410-m02)       <target dev='hda' bus='virtio'/>
	I0115 02:59:26.683707   23809 main.go:141] libmachine: (ha-680410-m02)     </disk>
	I0115 02:59:26.683718   23809 main.go:141] libmachine: (ha-680410-m02)     <interface type='network'>
	I0115 02:59:26.683737   23809 main.go:141] libmachine: (ha-680410-m02)       <source network='mk-ha-680410'/>
	I0115 02:59:26.683754   23809 main.go:141] libmachine: (ha-680410-m02)       <model type='virtio'/>
	I0115 02:59:26.683767   23809 main.go:141] libmachine: (ha-680410-m02)     </interface>
	I0115 02:59:26.683779   23809 main.go:141] libmachine: (ha-680410-m02)     <interface type='network'>
	I0115 02:59:26.683790   23809 main.go:141] libmachine: (ha-680410-m02)       <source network='default'/>
	I0115 02:59:26.683801   23809 main.go:141] libmachine: (ha-680410-m02)       <model type='virtio'/>
	I0115 02:59:26.683815   23809 main.go:141] libmachine: (ha-680410-m02)     </interface>
	I0115 02:59:26.683831   23809 main.go:141] libmachine: (ha-680410-m02)     <serial type='pty'>
	I0115 02:59:26.683848   23809 main.go:141] libmachine: (ha-680410-m02)       <target port='0'/>
	I0115 02:59:26.683860   23809 main.go:141] libmachine: (ha-680410-m02)     </serial>
	I0115 02:59:26.683873   23809 main.go:141] libmachine: (ha-680410-m02)     <console type='pty'>
	I0115 02:59:26.683881   23809 main.go:141] libmachine: (ha-680410-m02)       <target type='serial' port='0'/>
	I0115 02:59:26.683891   23809 main.go:141] libmachine: (ha-680410-m02)     </console>
	I0115 02:59:26.683908   23809 main.go:141] libmachine: (ha-680410-m02)     <rng model='virtio'>
	I0115 02:59:26.683924   23809 main.go:141] libmachine: (ha-680410-m02)       <backend model='random'>/dev/random</backend>
	I0115 02:59:26.683935   23809 main.go:141] libmachine: (ha-680410-m02)     </rng>
	I0115 02:59:26.683945   23809 main.go:141] libmachine: (ha-680410-m02)     
	I0115 02:59:26.683956   23809 main.go:141] libmachine: (ha-680410-m02)     
	I0115 02:59:26.683965   23809 main.go:141] libmachine: (ha-680410-m02)   </devices>
	I0115 02:59:26.683979   23809 main.go:141] libmachine: (ha-680410-m02) </domain>
	I0115 02:59:26.683995   23809 main.go:141] libmachine: (ha-680410-m02) 
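For readability, the domain definition that libmachine prints line by line above assembles into the following libvirt XML (reproduced from the log; blank lines in the log mark optional elements that are empty for this machine):

	<domain type='kvm'>
	  <name>ha-680410-m02</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'></cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/ha-680410-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-ha-680410'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>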
	I0115 02:59:26.690205   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:19:e5:0c in network default
	I0115 02:59:26.690783   23809 main.go:141] libmachine: (ha-680410-m02) Ensuring networks are active...
	I0115 02:59:26.690802   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:26.691366   23809 main.go:141] libmachine: (ha-680410-m02) Ensuring network default is active
	I0115 02:59:26.691764   23809 main.go:141] libmachine: (ha-680410-m02) Ensuring network mk-ha-680410 is active
	I0115 02:59:26.692145   23809 main.go:141] libmachine: (ha-680410-m02) Getting domain xml...
	I0115 02:59:26.692917   23809 main.go:141] libmachine: (ha-680410-m02) Creating domain...
	I0115 02:59:27.858093   23809 main.go:141] libmachine: (ha-680410-m02) Waiting to get IP...
	I0115 02:59:27.858797   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:27.859193   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:27.859250   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:27.859187   24145 retry.go:31] will retry after 260.706878ms: waiting for machine to come up
	I0115 02:59:28.121668   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:28.122089   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:28.122115   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:28.122065   24145 retry.go:31] will retry after 387.419657ms: waiting for machine to come up
	I0115 02:59:28.510532   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:28.510996   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:28.511019   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:28.510934   24145 retry.go:31] will retry after 468.864898ms: waiting for machine to come up
	I0115 02:59:28.981613   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:28.982034   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:28.982058   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:28.981984   24145 retry.go:31] will retry after 575.195399ms: waiting for machine to come up
	I0115 02:59:29.558383   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:29.558883   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:29.558917   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:29.558823   24145 retry.go:31] will retry after 729.236253ms: waiting for machine to come up
	I0115 02:59:30.289099   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:30.289481   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:30.289511   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:30.289428   24145 retry.go:31] will retry after 829.478965ms: waiting for machine to come up
	I0115 02:59:31.121576   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:31.122084   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:31.122114   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:31.122045   24145 retry.go:31] will retry after 1.035714115s: waiting for machine to come up
	I0115 02:59:32.159626   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:32.160096   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:32.160119   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:32.160045   24145 retry.go:31] will retry after 1.19378826s: waiting for machine to come up
	I0115 02:59:33.355434   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:33.355910   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:33.355941   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:33.355853   24145 retry.go:31] will retry after 1.766332935s: waiting for machine to come up
	I0115 02:59:35.124834   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:35.125308   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:35.125347   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:35.125237   24145 retry.go:31] will retry after 2.009274852s: waiting for machine to come up
	I0115 02:59:37.135745   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:37.136228   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:37.136264   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:37.136154   24145 retry.go:31] will retry after 2.052928537s: waiting for machine to come up
	I0115 02:59:39.190454   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:39.191026   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:39.191057   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:39.190965   24145 retry.go:31] will retry after 3.049894642s: waiting for machine to come up
	I0115 02:59:42.242396   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:42.242889   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:42.242918   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:42.242836   24145 retry.go:31] will retry after 3.604090845s: waiting for machine to come up
	I0115 02:59:45.848336   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:45.848726   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:45.848749   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:45.848689   24145 retry.go:31] will retry after 3.507386872s: waiting for machine to come up
	I0115 02:59:49.359121   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.359498   23809 main.go:141] libmachine: (ha-680410-m02) Found IP for machine: 192.168.39.178
	I0115 02:59:49.359525   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has current primary IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.359536   23809 main.go:141] libmachine: (ha-680410-m02) Reserving static IP address...
	I0115 02:59:49.359840   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find host DHCP lease matching {name: "ha-680410-m02", mac: "52:54:00:46:bb:0b", ip: "192.168.39.178"} in network mk-ha-680410
	I0115 02:59:49.428463   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Getting to WaitForSSH function...
	I0115 02:59:49.428496   23809 main.go:141] libmachine: (ha-680410-m02) Reserved static IP address: 192.168.39.178
	I0115 02:59:49.428512   23809 main.go:141] libmachine: (ha-680410-m02) Waiting for SSH to be available...
	I0115 02:59:49.430912   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.431308   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.431333   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.431520   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Using SSH client type: external
	I0115 02:59:49.431541   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa (-rw-------)
	I0115 02:59:49.431562   23809 main.go:141] libmachine: (ha-680410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 02:59:49.431572   23809 main.go:141] libmachine: (ha-680410-m02) DBG | About to run SSH command:
	I0115 02:59:49.431589   23809 main.go:141] libmachine: (ha-680410-m02) DBG | exit 0
	I0115 02:59:49.518740   23809 main.go:141] libmachine: (ha-680410-m02) DBG | SSH cmd err, output: <nil>: 
	I0115 02:59:49.518934   23809 main.go:141] libmachine: (ha-680410-m02) KVM machine creation complete!
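Two patterns are visible in the machine-creation phase that just finished: the "Waiting to get IP" loop retries the DHCP-lease lookup with a growing, jittered delay (260 ms up to ~3.6 s in the retry.go lines above), and the final liveness check is an external ssh invocation running `exit 0`. A generic sketch of the backoff shape (the factor, cap, and attempt count are illustrative assumptions, not minikube's exact constants):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op with a growing, jittered delay, mirroring
	// the "will retry after ..." lines in the log above.
	func retryWithBackoff(maxAttempts int, op func() error) error {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := op(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("attempt %d failed; will retry after %s\n", attempt, delay+jitter)
			time.Sleep(delay + jitter)
			if delay *= 2; delay > 4*time.Second {
				delay = 4 * time.Second
			}
		}
		return errors.New("gave up waiting for the machine to come up")
	}

	func main() {
		_ = retryWithBackoff(5, func() error {
			return errors.New("no DHCP lease yet") // stand-in for the lease lookup
		})
	}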
	I0115 02:59:49.519240   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetConfigRaw
	I0115 02:59:49.519751   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:49.519975   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:49.520135   23809 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 02:59:49.520152   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 02:59:49.521402   23809 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 02:59:49.521420   23809 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 02:59:49.521429   23809 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 02:59:49.521439   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.523689   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.524022   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.524052   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.524146   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.524356   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.524522   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.524668   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.524813   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.525198   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.525213   23809 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 02:59:49.630394   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
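The probe above is the native SSH client dialing 192.168.39.178:22 and running `exit 0`; a zero exit status is enough to prove sshd is up and the key is accepted. A minimal equivalent using golang.org/x/crypto/ssh (an external module, not minikube's own wrapper; host-key checking is disabled, as it is for these throwaway test VMs):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; no known_hosts
		}
		client, err := ssh.Dial("tcp", "192.168.39.178:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// "exit 0" succeeds iff the session is usable, i.e. the machine is reachable.
		fmt.Println("ssh reachable:", sess.Run("exit 0") == nil)
	}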
	I0115 02:59:49.630413   23809 main.go:141] libmachine: Detecting the provisioner...
	I0115 02:59:49.630423   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.632873   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.633244   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.633266   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.633412   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.633585   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.633716   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.633820   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.633948   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.634260   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.634275   23809 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 02:59:49.739957   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 02:59:49.740036   23809 main.go:141] libmachine: found compatible host: buildroot
	I0115 02:59:49.740050   23809 main.go:141] libmachine: Provisioning with buildroot...
	I0115 02:59:49.740063   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:49.740332   23809 buildroot.go:166] provisioning hostname "ha-680410-m02"
	I0115 02:59:49.740358   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:49.740506   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.742938   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.743208   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.743235   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.743374   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.743548   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.743702   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.743853   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.744018   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.744376   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.744390   23809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-680410-m02 && echo "ha-680410-m02" | sudo tee /etc/hostname
	I0115 02:59:49.864408   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-680410-m02
	
	I0115 02:59:49.864442   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.867017   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.867360   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.867381   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.867568   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.867763   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.867943   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.868070   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.868308   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.868646   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.868665   23809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-680410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-680410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-680410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 02:59:49.983674   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
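
The script above is the standard idempotent hostname pin: if no /etc/hosts line already maps the new hostname, an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. A minimal Go sketch of the same check-then-rewrite logic (path and hostname mirror the log; the helper name is illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostsEntry mirrors the shell above: if no line already maps the
    // hostname, rewrite an existing 127.0.1.1 entry or append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // already mapped, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + hostname
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte(entry))
        } else {
            data = append(data, []byte(entry+"\n")...)
        }
        return os.WriteFile(path, data, 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "ha-680410-m02"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Running it twice is a no-op, which is why the provisioner can safely repeat it on restarts.
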
	I0115 02:59:49.983707   23809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 02:59:49.983726   23809 buildroot.go:174] setting up certificates
	I0115 02:59:49.983736   23809 provision.go:84] configureAuth start
	I0115 02:59:49.983747   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:49.984039   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:49.986567   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.986969   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.987007   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.987138   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.989073   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.989388   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.989424   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.989528   23809 provision.go:143] copyHostCerts
	I0115 02:59:49.989549   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:59:49.989574   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem, removing ...
	I0115 02:59:49.989582   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:59:49.989654   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 02:59:49.989720   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:59:49.989740   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem, removing ...
	I0115 02:59:49.989747   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:59:49.989769   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 02:59:49.989809   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:59:49.989824   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem, removing ...
	I0115 02:59:49.989830   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:59:49.989850   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
	I0115 02:59:49.989894   23809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.ha-680410-m02 san=[127.0.0.1 192.168.39.178 ha-680410-m02 localhost minikube]
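
The SAN list in that line is what lets a single server.pem answer for the node IP, the loopback address, and the machine names alike. A compact sketch of CA-signed server-cert issuance with those SANs, using only the standard library and a throwaway self-signed CA standing in for minikube's ca.pem/ca-key.pem (all names illustrative; error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed stand-in for the shared minikube CA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-680410-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.178")},
            DNSNames:     []string{"ha-680410-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }
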
	I0115 02:59:50.294184   23809 provision.go:177] copyRemoteCerts
	I0115 02:59:50.294238   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 02:59:50.294259   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.296954   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.297289   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.297323   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.297435   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.297638   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.297806   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.297994   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	I0115 02:59:50.380228   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 02:59:50.380285   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 02:59:50.402309   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 02:59:50.402372   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0115 02:59:50.423065   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 02:59:50.423112   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 02:59:50.444611   23809 provision.go:87] duration metric: took 460.864546ms to configureAuth
	I0115 02:59:50.444630   23809 buildroot.go:189] setting minikube options for container-runtime
	I0115 02:59:50.444787   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:50.444805   23809 main.go:141] libmachine: Checking connection to Docker...
	I0115 02:59:50.444814   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetURL
	I0115 02:59:50.445924   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Using libvirt version 6000000
	I0115 02:59:50.447919   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.448188   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.448223   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.448328   23809 main.go:141] libmachine: Docker is up and running!
	I0115 02:59:50.448341   23809 main.go:141] libmachine: Reticulating splines...
	I0115 02:59:50.448347   23809 client.go:171] duration metric: took 24.100015468s to LocalClient.Create
	I0115 02:59:50.448366   23809 start.go:167] duration metric: took 24.100066383s to libmachine.API.Create "ha-680410"
	I0115 02:59:50.448375   23809 start.go:293] postStartSetup for "ha-680410-m02" (driver="kvm2")
	I0115 02:59:50.448386   23809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 02:59:50.448402   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.448612   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 02:59:50.448631   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.450564   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.450922   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.450950   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.451048   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.451195   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.451339   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.451457   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	I0115 02:59:50.536460   23809 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 02:59:50.540499   23809 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 02:59:50.540519   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 02:59:50.540584   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 02:59:50.540674   23809 filesync.go:149] local asset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> 149542.pem in /etc/ssl/certs
	I0115 02:59:50.540687   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /etc/ssl/certs/149542.pem
	I0115 02:59:50.540816   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 02:59:50.548879   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:59:50.571160   23809 start.go:296] duration metric: took 122.771281ms for postStartSetup
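
The filesync pass mirrors the .minikube/files tree onto the guest verbatim, which is why files/etc/ssl/certs/149542.pem lands at /etc/ssl/certs/149542.pem. A small walk that prints the same local-to-guest mapping (root path taken from the log; the helper itself is illustrative):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // planSync walks a local root and prints the guest path each file maps
    // to, mirroring the Scanning/NewFileAsset lines above.
    func planSync(root string) error {
        return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            fmt.Printf("%s -> /%s\n", path, rel)
            return nil
        })
    }

    func main() {
        _ = planSync("/home/jenkins/minikube-integration/17909-7685/.minikube/files")
    }
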
	I0115 02:59:50.571207   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetConfigRaw
	I0115 02:59:50.571783   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:50.574313   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.574631   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.574657   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.574866   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:59:50.575060   23809 start.go:128] duration metric: took 24.243905256s to createHost
	I0115 02:59:50.575084   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.577092   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.577461   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.577503   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.577666   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.577849   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.578008   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.578148   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.578309   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:50.578699   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:50.578713   23809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 02:59:50.688205   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705287590.667352173
	
	I0115 02:59:50.688222   23809 fix.go:216] guest clock: 1705287590.667352173
	I0115 02:59:50.688229   23809 fix.go:229] Guest: 2024-01-15 02:59:50.667352173 +0000 UTC Remote: 2024-01-15 02:59:50.575073246 +0000 UTC m=+82.718607387 (delta=92.278927ms)
	I0115 02:59:50.688242   23809 fix.go:200] guest clock delta is within tolerance: 92.278927ms
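
The tolerance check works off the guest's `date +%s.%N` output: parse it, diff it against the host clock, and accept the machine only if the drift is small. A toy version of that comparison; the 2-second tolerance is an assumption for illustration, not minikube's configured value:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDeltaOK parses the guest's "seconds.nanoseconds" timestamp and
    // checks its drift against the local clock.
    func clockDeltaOK(guest string, tolerance time.Duration) (time.Duration, bool) {
        sec, err := strconv.ParseFloat(guest, 64) // float precision is fine for a sketch
        if err != nil {
            return 0, false
        }
        guestTime := time.Unix(0, int64(sec*float64(time.Second)))
        delta := time.Since(guestTime)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Stands in for the `date +%s.%N` output read over SSH.
        guest := fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9)
        d, ok := clockDeltaOK(guest, 2*time.Second)
        fmt.Printf("delta=%v within=%v\n", d, ok)
    }
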
	I0115 02:59:50.688247   23809 start.go:83] releasing machines lock for "ha-680410-m02", held for 24.357207925s
	I0115 02:59:50.688267   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.688536   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:50.691195   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.691525   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.691562   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.694213   23809 out.go:177] * Found network options:
	I0115 02:59:50.695754   23809 out.go:177]   - NO_PROXY=192.168.39.194
	W0115 02:59:50.697133   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 02:59:50.697168   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.697634   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.697795   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.697874   23809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 02:59:50.697912   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	W0115 02:59:50.697997   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 02:59:50.698070   23809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 02:59:50.698094   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.700663   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.700683   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.701014   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.701044   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.701075   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.701096   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.701247   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.701354   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.701431   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.701518   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.701536   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.701625   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.701678   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	I0115 02:59:50.701734   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	W0115 02:59:50.801935   23809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 02:59:50.802004   23809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 02:59:50.818063   23809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 02:59:50.818081   23809 start.go:494] detecting cgroup driver to use...
	I0115 02:59:50.818136   23809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 02:59:50.850336   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 02:59:50.862144   23809 docker.go:217] disabling cri-docker service (if available) ...
	I0115 02:59:50.862188   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 02:59:50.877268   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 02:59:50.890432   23809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 02:59:51.000043   23809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 02:59:51.108329   23809 docker.go:233] disabling docker service ...
	I0115 02:59:51.108385   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 02:59:51.121033   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 02:59:51.132201   23809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 02:59:51.229539   23809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 02:59:51.323601   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 02:59:51.335123   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 02:59:51.351425   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 02:59:51.360848   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 02:59:51.369586   23809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 02:59:51.369624   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 02:59:51.378610   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:59:51.387425   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 02:59:51.396588   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:59:51.405103   23809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 02:59:51.414220   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
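
Taken together, the sed runs above rewrite /etc/containerd/config.toml: pin the sandbox (pause) image, relax restrict_oom_score_adj, force SystemdCgroup = false for the cgroupfs driver, migrate v1 runtime names to io.containerd.runc.v2, and reset conf_dir. The same rule-driven rewrite sketched in Go (rules copied from the log; this is an illustration, not minikube's implementation):

    package main

    import (
        "os"
        "regexp"
    )

    // rule pairs a pattern with its replacement, one per sed invocation above.
    type rule struct {
        re   *regexp.Regexp
        repl string
    }

    func main() {
        rules := []rule{
            {regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`), `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
            {regexp.MustCompile(`(?m)^(\s*)restrict_oom_score_adj = .*$`), `${1}restrict_oom_score_adj = false`},
            {regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`), `${1}SystemdCgroup = false`},
            {regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`), `"io.containerd.runc.v2"`},
            {regexp.MustCompile(`"io\.containerd\.runc\.v1"`), `"io.containerd.runc.v2"`},
            {regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`), `${1}conf_dir = "/etc/cni/net.d"`},
        }
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        for _, r := range rules {
            data = r.re.ReplaceAll(data, []byte(r.repl))
        }
        if err := os.WriteFile(path, data, 0644); err != nil {
            panic(err)
        }
    }
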
	I0115 02:59:51.423286   23809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 02:59:51.431336   23809 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 02:59:51.431376   23809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 02:59:51.443990   23809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 02:59:51.452319   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:59:51.556825   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:59:51.587255   23809 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 02:59:51.587318   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:59:51.592708   23809 retry.go:31] will retry after 1.426358479s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
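
The retry above is a plain poll against the socket path, repeated until the 60s budget from the "Will wait 60s for socket path" line runs out. A minimal poll loop under those assumptions (fixed 1s interval instead of retry.go's jittered backoff):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
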
	I0115 02:59:53.019728   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:59:53.025273   23809 start.go:562] Will wait 60s for crictl version
	I0115 02:59:53.025325   23809 ssh_runner.go:195] Run: which crictl
	I0115 02:59:53.029537   23809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 02:59:53.069082   23809 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 02:59:53.069145   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:59:53.095070   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:59:53.125400   23809 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 02:59:53.127079   23809 out.go:177]   - env NO_PROXY=192.168.39.194
	I0115 02:59:53.128589   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:53.131271   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:53.131652   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:53.131673   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:53.131848   23809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 02:59:53.135782   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:59:53.148234   23809 mustload.go:65] Loading cluster: ha-680410
	I0115 02:59:53.148428   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:53.148780   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:53.148813   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:53.163169   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I0115 02:59:53.163534   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:53.163942   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:53.163964   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:53.164227   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:53.164401   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:53.165595   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 02:59:53.165847   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:53.165867   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:53.178905   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0115 02:59:53.179250   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:53.179649   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:53.179673   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:53.179942   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:53.180118   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:59:53.180274   23809 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410 for IP: 192.168.39.178
	I0115 02:59:53.180286   23809 certs.go:194] generating shared ca certs ...
	I0115 02:59:53.180303   23809 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:53.180433   23809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 02:59:53.180491   23809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 02:59:53.180504   23809 certs.go:256] generating profile certs ...
	I0115 02:59:53.180600   23809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key
	I0115 02:59:53.180631   23809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0
	I0115 02:59:53.180651   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.178 192.168.39.254]
	I0115 02:59:53.328651   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0 ...
	I0115 02:59:53.328673   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0: {Name:mk17a24c2a124432866ca036d582c795468142b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:53.328814   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0 ...
	I0115 02:59:53.328826   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0: {Name:mk6269895e33577cb314f33bcc0b0cb879fcbb31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:53.328891   23809 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt
	I0115 02:59:53.328993   23809 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key
	I0115 02:59:53.329105   23809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key
	I0115 02:59:53.329119   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 02:59:53.329130   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 02:59:53.329140   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 02:59:53.329150   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 02:59:53.329160   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 02:59:53.329170   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 02:59:53.329180   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 02:59:53.329189   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 02:59:53.329231   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem (1338 bytes)
	W0115 02:59:53.329261   23809 certs.go:480] ignoring /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954_empty.pem, impossibly tiny 0 bytes
	I0115 02:59:53.329270   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 02:59:53.329289   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 02:59:53.329310   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 02:59:53.329330   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 02:59:53.329368   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:59:53.329394   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.329407   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem -> /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.329419   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.329447   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:59:53.331989   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:53.332401   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:59:53.332422   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:53.332571   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:59:53.332744   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:59:53.332873   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:59:53.333000   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:59:53.411697   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0115 02:59:53.415973   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0115 02:59:53.426847   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0115 02:59:53.430520   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0115 02:59:53.440838   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0115 02:59:53.444799   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0115 02:59:53.454889   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0115 02:59:53.458873   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0115 02:59:53.472436   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0115 02:59:53.476972   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0115 02:59:53.487216   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0115 02:59:53.491206   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0115 02:59:53.501401   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 02:59:53.525062   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 02:59:53.546761   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 02:59:53.568620   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 02:59:53.589940   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0115 02:59:53.610824   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 02:59:53.631657   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 02:59:53.652570   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 02:59:53.676934   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 02:59:53.700039   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem --> /usr/share/ca-certificates/14954.pem (1338 bytes)
	I0115 02:59:53.722902   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /usr/share/ca-certificates/149542.pem (1708 bytes)
	I0115 02:59:53.746152   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0115 02:59:53.763154   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0115 02:59:53.779772   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0115 02:59:53.795069   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0115 02:59:53.809685   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0115 02:59:53.826297   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0115 02:59:53.842644   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0115 02:59:53.858787   23809 ssh_runner.go:195] Run: openssl version
	I0115 02:59:53.864120   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14954.pem && ln -fs /usr/share/ca-certificates/14954.pem /etc/ssl/certs/14954.pem"
	I0115 02:59:53.875463   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.879871   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:54 /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.879914   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.885350   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14954.pem /etc/ssl/certs/51391683.0"
	I0115 02:59:53.896959   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149542.pem && ln -fs /usr/share/ca-certificates/149542.pem /etc/ssl/certs/149542.pem"
	I0115 02:59:53.908090   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.912514   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 15 02:54 /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.912553   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.918007   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149542.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 02:59:53.929255   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 02:59:53.940631   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.945253   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.945291   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.951649   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
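
Each installed CA is exposed to OpenSSL-style consumers through a <subject-hash>.0 symlink in /etc/ssl/certs, which is what the `openssl x509 -hash -noout` plus `ln -fs` pairs above produce (51391683.0, 3ec20f2e.0, b5213941.0). A sketch that shells out to openssl the same way; the helper name is invented:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash reproduces the shell pipeline above: ask openssl for
    // the subject hash, then symlink <hash>.0 back to the certificate.
    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink already present
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
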
	I0115 02:59:53.962958   23809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 02:59:53.967057   23809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 02:59:53.967104   23809 kubeadm.go:928] updating node {m02 192.168.39.178 8443 v1.28.4 containerd true true} ...
	I0115 02:59:53.967195   23809 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-680410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0115 02:59:53.967224   23809 kube-vip.go:101] generating kube-vip config ...
	I0115 02:59:53.967257   23809 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_ddns
	      value: "false"
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.6.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
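
That manifest is a static pod: kube-vip runs on every control-plane node, competes for the plndr-cp-lock leader lease, and ARPs the virtual IP 192.168.39.254 from whichever node currently leads, giving the cluster a stable API endpoint across control planes. A cut-down text/template rendering of the two per-cluster fields, the VIP and the port; this template is illustrative, not the one minikube actually ships:

    package main

    import (
        "os"
        "text/template"
    )

    // Only the fields that vary per cluster are parameterized here.
    const kubeVIP = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.6.4
        args: ["manager"]
        env:
        - {name: port, value: "{{.Port}}"}
        - {name: address, value: "{{.VIP}}"}
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVIP))
        _ = t.Execute(os.Stdout, struct {
            VIP  string
            Port int
        }{"192.168.39.254", 8443})
    }
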
	I0115 02:59:53.967296   23809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 02:59:53.977131   23809 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0115 02:59:53.977172   23809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0115 02:59:53.986826   23809 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0115 02:59:53.986844   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0115 02:59:53.986858   23809 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0115 02:59:53.986863   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 02:59:53.987026   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 02:59:53.993880   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0115 02:59:53.993903   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
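
Each binary is downloaded against a published .sha256 companion (see the checksum= fragments in the URLs above) and verified before being copied onto the node. A bare-bones download-and-verify helper, under the assumption that the .sha256 endpoint returns just the hex digest, which holds for dl.k8s.io release artifacts:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dest and checks it against the hex
    // digest published at url+".sha256".
    func fetchVerified(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s", url)
        }
        return nil
    }

    func main() {
        fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm", "kubeadm"))
    }
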
	I0115 03:00:25.831535   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:00:25.831622   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:00:25.836545   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0115 03:00:25.836588   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0115 03:01:03.555508   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:01:03.571109   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:01:03.571218   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:01:03.575682   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0115 03:01:03.575715   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0115 03:01:04.047156   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0115 03:01:04.055233   23809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0115 03:01:04.070862   23809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 03:01:04.086351   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1265 bytes)
	I0115 03:01:04.102692   23809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0115 03:01:04.106268   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 03:01:04.118371   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:01:04.221482   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:01:04.238608   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:01:04.238958   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:01:04.238996   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:01:04.253102   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0115 03:01:04.253478   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:01:04.253904   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:01:04.253924   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:01:04.254250   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:01:04.254446   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:01:04.254593   23809 start.go:316] joinCluster: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 03:01:04.254678   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 03:01:04.254692   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:01:04.257331   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:01:04.257723   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:01:04.257757   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:01:04.257855   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:01:04.258004   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:01:04.258158   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:01:04.258314   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:01:04.445679   23809 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:01:04.445718   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t81wp4.fpof4owqtmts2vhf --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m02 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443"
	I0115 03:01:41.573741   23809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t81wp4.fpof4owqtmts2vhf --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m02 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443": (37.127963039s)
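The join itself is a two-step handshake, visible in the two commands above: minikube first runs `kubeadm token create --print-join-command --ttl=0` on the existing control plane to mint a join command, then executes that command on the new node with `--control-plane`, the CRI socket, and the advertise/bind flags appended. A minimal Go sketch of the same flow, assuming direct shell access instead of minikube's ssh_runner (the flag handling is illustrative, not minikube's start.go):

```go
// Hypothetical sketch of the token-create/join handshake shown in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on the existing control plane, mint a join command with a
	// non-expiring token (--ttl=0), as the log does.
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2: on the joining node, append the control-plane flags the log
	// adds before running the command (shown here rather than executed).
	full := joinCmd + " --control-plane --apiserver-bind-port=8443"
	fmt.Println("would run:", full)
}
```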
	I0115 03:01:41.573771   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 03:01:42.042507   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-680410-m02 minikube.k8s.io/updated_at=2024_01_15T03_01_42_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=false
	I0115 03:01:42.153170   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-680410-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0115 03:01:42.298215   23809 start.go:318] duration metric: took 38.043615498s to joinCluster
	I0115 03:01:42.298299   23809 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:01:42.298560   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:01:42.299915   23809 out.go:177] * Verifying Kubernetes components...
	I0115 03:01:42.301477   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:01:42.494473   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:01:42.516088   23809 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 03:01:42.516337   23809 kapi.go:59] client config for ha-680410: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key", CAFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0115 03:01:42.516460   23809 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.194:8443
	I0115 03:01:42.516734   23809 node_ready.go:35] waiting up to 6m0s for node "ha-680410-m02" to be "Ready" ...
	I0115 03:01:42.516846   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:42.516858   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:42.516869   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:42.516882   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:42.527693   23809 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 03:01:43.017911   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:43.017935   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:43.017947   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:43.017955   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:43.023042   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:43.516991   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:43.517012   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:43.517020   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:43.517026   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:43.519869   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:44.017047   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:44.017067   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:44.017075   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:44.017081   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:44.020707   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:44.517543   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:44.517563   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:44.517571   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:44.517576   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:44.522683   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:44.523683   23809 node_ready.go:53] node "ha-680410-m02" has status "Ready":"False"
	I0115 03:01:45.017074   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:45.017094   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:45.017102   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:45.017108   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:45.020693   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:45.517922   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:45.517943   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:45.517950   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:45.517957   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:45.521364   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:46.017588   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:46.017609   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:46.017616   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:46.017623   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:46.023096   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:46.517590   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:46.517615   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:46.517623   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:46.517629   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:46.521427   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:47.017600   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:47.017619   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:47.017627   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:47.017633   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:47.021486   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:47.022121   23809 node_ready.go:53] node "ha-680410-m02" has status "Ready":"False"
	I0115 03:01:47.517551   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:47.517571   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:47.517579   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:47.517585   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:47.520938   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:48.017395   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:48.017418   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:48.017430   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:48.017439   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:48.021348   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:48.517140   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:48.517166   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:48.517177   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:48.517187   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:48.520787   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:49.017616   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:49.017636   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:49.017644   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:49.017650   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:49.021413   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:49.517861   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:49.517882   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:49.517891   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:49.517900   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:49.521962   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:49.522653   23809 node_ready.go:53] node "ha-680410-m02" has status "Ready":"False"
	I0115 03:01:50.017028   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:50.017050   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:50.017061   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:50.017068   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:50.020707   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:50.517823   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:50.517845   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:50.517855   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:50.517864   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:50.523559   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:51.017678   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:51.017704   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.017716   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.017726   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.021625   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.516974   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:51.516995   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.517002   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.517008   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.520806   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.521427   23809 node_ready.go:49] node "ha-680410-m02" has status "Ready":"True"
	I0115 03:01:51.521443   23809 node_ready.go:38] duration metric: took 9.004675462s for node "ha-680410-m02" to be "Ready" ...
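The nine seconds above are spent in a simple poll: a GET on the node object roughly every 500ms until its Ready condition reports True. A hedged client-go sketch of that loop (the node name and the 6m budget come from the log; the kubeconfig path and helper are illustrative, not minikube's node_ready.go):

```go
// Minimal client-go sketch of the ~500ms node-Ready poll seen above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // matches the log's 6m0s budget
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-680410-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
	}
	panic("timed out waiting for Ready")
}
```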
	I0115 03:01:51.521450   23809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 03:01:51.521496   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:51.521505   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.521511   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.521517   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.527611   23809 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0115 03:01:51.533876   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.533942   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-krvzt
	I0115 03:01:51.533950   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.533957   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.533963   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.537020   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.537599   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.537611   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.537619   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.537627   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.540292   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.540805   23809 pod_ready.go:92] pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.540820   23809 pod_ready.go:81] duration metric: took 6.924523ms for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.540827   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.540872   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mqq9g
	I0115 03:01:51.540879   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.540886   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.540892   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.543321   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.543909   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.543923   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.543930   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.543935   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.547116   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.547692   23809 pod_ready.go:92] pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.547706   23809 pod_ready.go:81] duration metric: took 6.874076ms for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.547714   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.547757   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410
	I0115 03:01:51.547765   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.547771   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.547777   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.550488   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.550965   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.550978   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.550984   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.550990   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.553562   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.554090   23809 pod_ready.go:92] pod "etcd-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.554103   23809 pod_ready.go:81] duration metric: took 6.384351ms for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.554110   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.554148   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m02
	I0115 03:01:51.554154   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.554161   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.554167   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.556681   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.557359   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:51.557374   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.557384   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.557394   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.559722   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.560215   23809 pod_ready.go:92] pod "etcd-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.560234   23809 pod_ready.go:81] duration metric: took 6.118371ms for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.560262   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.717617   23809 request.go:629] Waited for 157.297637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410
	I0115 03:01:51.717678   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410
	I0115 03:01:51.717683   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.717691   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.717704   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.722151   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
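The `Waited for ... due to client-side throttling` messages are emitted by client-go's own rate limiter, not the apiserver: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client falls back to client-go's defaults (5 QPS with a burst of 10), and once the burst is spent each request blocks until a token is available. A standalone sketch of that token-bucket behavior (the numbers are the client-go defaults, not values taken from this log):

```go
// Demonstrates client-go's client-side token-bucket rate limiting.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	rl := flowcontrol.NewTokenBucketRateLimiter(5.0, 10) // 5 QPS, burst 10
	for i := 0; i < 12; i++ {
		start := time.Now()
		rl.Accept() // blocks once the burst is spent, like the waits above
		fmt.Printf("request %d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
```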
	I0115 03:01:51.917100   23809 request.go:629] Waited for 194.268802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.917155   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.917164   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.917189   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.917200   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.920421   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.921123   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.921141   23809 pod_ready.go:81] duration metric: took 360.869197ms for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.921149   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:52.117323   23809 request.go:629] Waited for 196.116212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.117385   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.117392   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.117400   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.117408   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.121525   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:52.317048   23809 request.go:629] Waited for 194.270712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.317100   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.317113   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.317124   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.317137   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.322580   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:52.517009   23809 request.go:629] Waited for 95.179445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.517069   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.517074   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.517082   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.517088   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.520851   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:52.717892   23809 request.go:629] Waited for 196.36141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.717965   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.717972   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.717983   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.717994   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.721752   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:52.722302   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:52.722320   23809 pod_ready.go:81] duration metric: took 801.163112ms for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:52.722331   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:52.917542   23809 request.go:629] Waited for 195.153703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:01:52.917621   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:01:52.917632   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.917644   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.917655   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.921018   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.117005   23809 request.go:629] Waited for 195.158682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.117058   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.117063   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.117072   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.117081   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.120878   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.121428   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:53.121448   23809 pod_ready.go:81] duration metric: took 399.107978ms for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.121460   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.317099   23809 request.go:629] Waited for 195.562521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:01:53.317157   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:01:53.317163   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.317171   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.317181   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.320265   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.517579   23809 request.go:629] Waited for 196.328604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:53.517647   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:53.517659   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.517666   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.517674   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.520876   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.521331   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:53.521351   23809 pod_ready.go:81] duration metric: took 399.883559ms for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.521362   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.717539   23809 request.go:629] Waited for 196.108117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:01:53.717605   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:01:53.717611   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.717619   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.717628   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.722084   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:53.917103   23809 request.go:629] Waited for 194.279747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.917165   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.917171   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.917178   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.917184   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.920330   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.921239   23809 pod_ready.go:92] pod "kube-proxy-g2kmv" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:53.921256   23809 pod_ready.go:81] duration metric: took 399.88799ms for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.921268   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.117368   23809 request.go:629] Waited for 196.040589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:01:54.117467   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:01:54.117489   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.117500   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.117510   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.120934   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:54.318006   23809 request.go:629] Waited for 196.282183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:54.318059   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:54.318064   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.318078   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.318093   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.325742   23809 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 03:01:54.326424   23809 pod_ready.go:92] pod "kube-proxy-hlbjr" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:54.326447   23809 pod_ready.go:81] duration metric: took 405.170732ms for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.326459   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.517523   23809 request.go:629] Waited for 190.982989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:01:54.517581   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:01:54.517588   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.517597   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.517607   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.522042   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:54.717985   23809 request.go:629] Waited for 195.356286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:54.718040   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:54.718045   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.718052   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.718071   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.722936   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:54.723621   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:54.723639   23809 pod_ready.go:81] duration metric: took 397.170581ms for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.723651   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.917818   23809 request.go:629] Waited for 194.098369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:01:54.917900   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:01:54.917911   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.917927   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.917955   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.923613   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:55.117552   23809 request.go:629] Waited for 193.345721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:55.117622   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:55.117629   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.117641   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.117671   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.121591   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:55.122149   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:55.122165   23809 pod_ready.go:81] duration metric: took 398.503462ms for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:55.122175   23809 pod_ready.go:38] duration metric: took 3.600715297s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
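Each pod wait above follows the same pattern: fetch the pods matching one of the component labels, then check the PodReady condition, pairing each pod check with a GET on its node. A minimal sketch under the same assumptions as the earlier node poll (the kubeconfig path is a placeholder; the selector is one of the labels listed in the log):

```go
// Sketch of the per-label pod "Ready" check summarized above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "component=etcd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
```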
	I0115 03:01:55.122188   23809 api_server.go:52] waiting for apiserver process to appear ...
	I0115 03:01:55.122234   23809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:01:55.136397   23809 api_server.go:72] duration metric: took 12.838062479s to wait for apiserver process to appear ...
	I0115 03:01:55.136419   23809 api_server.go:88] waiting for apiserver healthz status ...
	I0115 03:01:55.136439   23809 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0115 03:01:55.143075   23809 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0115 03:01:55.143145   23809 round_trippers.go:463] GET https://192.168.39.194:8443/version
	I0115 03:01:55.143158   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.143169   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.143182   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.144374   23809 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 03:01:55.144470   23809 api_server.go:141] control plane version: v1.28.4
	I0115 03:01:55.144486   23809 api_server.go:131] duration metric: took 8.061859ms to wait for apiserver health ...
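The health gate is deliberately simpler than the readiness polling: a GET to `/healthz` that must return HTTP 200 with the literal body `ok`, followed by a `/version` call to record the control-plane version. A bare sketch of the probe; the real client authenticates with the profile's client certificate and CA, whereas this placeholder skips TLS verification:

```go
// Bare healthz probe against the apiserver endpoint from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	tr := &http.Transport{
		// Placeholder: skip verification instead of loading the minikube CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}
	c := &http.Client{Transport: tr}
	resp, err := c.Get("https://192.168.39.194:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
```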
	I0115 03:01:55.144492   23809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 03:01:55.317870   23809 request.go:629] Waited for 173.31696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.317925   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.317932   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.317942   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.317953   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.323027   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:55.329524   23809 system_pods.go:59] 17 kube-system pods found
	I0115 03:01:55.329550   23809 system_pods.go:61] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:01:55.329555   23809 system_pods.go:61] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:01:55.329559   23809 system_pods.go:61] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:01:55.329563   23809 system_pods.go:61] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:01:55.329567   23809 system_pods.go:61] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:01:55.329571   23809 system_pods.go:61] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:01:55.329575   23809 system_pods.go:61] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:01:55.329579   23809 system_pods.go:61] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:01:55.329585   23809 system_pods.go:61] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:01:55.329589   23809 system_pods.go:61] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:01:55.329596   23809 system_pods.go:61] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:01:55.329599   23809 system_pods.go:61] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:01:55.329603   23809 system_pods.go:61] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:01:55.329607   23809 system_pods.go:61] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:01:55.329611   23809 system_pods.go:61] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:01:55.329615   23809 system_pods.go:61] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:01:55.329619   23809 system_pods.go:61] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:01:55.329626   23809 system_pods.go:74] duration metric: took 185.128562ms to wait for pod list to return data ...
	I0115 03:01:55.329632   23809 default_sa.go:34] waiting for default service account to be created ...
	I0115 03:01:55.516977   23809 request.go:629] Waited for 187.282498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:01:55.517049   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:01:55.517057   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.517064   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.517075   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.520445   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:55.520651   23809 default_sa.go:45] found service account: "default"
	I0115 03:01:55.520668   23809 default_sa.go:55] duration metric: took 191.029405ms for default service account to be created ...
	I0115 03:01:55.520677   23809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 03:01:55.717835   23809 request.go:629] Waited for 197.075281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.717916   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.717925   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.717934   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.717942   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.723474   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:55.728611   23809 system_pods.go:86] 17 kube-system pods found
	I0115 03:01:55.728646   23809 system_pods.go:89] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:01:55.728655   23809 system_pods.go:89] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:01:55.728665   23809 system_pods.go:89] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:01:55.728676   23809 system_pods.go:89] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:01:55.728686   23809 system_pods.go:89] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:01:55.728696   23809 system_pods.go:89] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:01:55.728703   23809 system_pods.go:89] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:01:55.728711   23809 system_pods.go:89] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:01:55.728718   23809 system_pods.go:89] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:01:55.728727   23809 system_pods.go:89] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:01:55.728737   23809 system_pods.go:89] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:01:55.728745   23809 system_pods.go:89] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:01:55.728752   23809 system_pods.go:89] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:01:55.728757   23809 system_pods.go:89] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:01:55.728763   23809 system_pods.go:89] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:01:55.728767   23809 system_pods.go:89] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:01:55.728773   23809 system_pods.go:89] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:01:55.728779   23809 system_pods.go:126] duration metric: took 208.097098ms to wait for k8s-apps to be running ...
	I0115 03:01:55.728788   23809 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 03:01:55.728831   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:01:55.743991   23809 system_svc.go:56] duration metric: took 15.193588ms WaitForService to wait for kubelet
	I0115 03:01:55.744021   23809 kubeadm.go:576] duration metric: took 13.445691685s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 03:01:55.744043   23809 node_conditions.go:102] verifying NodePressure condition ...
	I0115 03:01:55.917453   23809 request.go:629] Waited for 173.328646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes
	I0115 03:01:55.917507   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes
	I0115 03:01:55.917512   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.917519   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.917526   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.921092   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:55.921759   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:01:55.921781   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:01:55.921791   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:01:55.921795   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:01:55.921801   23809 node_conditions.go:105] duration metric: took 177.753297ms to run NodePressure ...
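The NodePressure pass reads each node's capacity straight off the Node objects, which is where the two `17784752Ki` / `cpu 2` pairs above come from. A short sketch of the same read (kubeconfig path is a placeholder):

```go
// List nodes and print the ephemeral-storage and CPU capacity the log reports.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
```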
	I0115 03:01:55.921811   23809 start.go:240] waiting for startup goroutines ...
	I0115 03:01:55.921843   23809 start.go:254] writing updated cluster config ...
	I0115 03:01:55.924119   23809 out.go:177] 
	I0115 03:01:55.925733   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:01:55.925825   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 03:01:55.927632   23809 out.go:177] * Starting "ha-680410-m03" control-plane node in "ha-680410" cluster
	I0115 03:01:55.928919   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 03:01:55.928936   23809 cache.go:56] Caching tarball of preloaded images
	I0115 03:01:55.929026   23809 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 03:01:55.929038   23809 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 03:01:55.929119   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 03:01:55.929266   23809 start.go:360] acquireMachinesLock for ha-680410-m03: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 03:01:55.929301   23809 start.go:364] duration metric: took 19.114µs to acquireMachinesLock for "ha-680410-m03"
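`acquireMachinesLock` is a named lock with a retry delay (500ms) and an overall timeout (13m): acquisition is retried until the lock frees up or the deadline passes, which is why an uncontended acquire completes in microseconds, as here. A toy channel-based analogue of that behavior (not minikube's implementation):

```go
// Toy named lock with retry delay and timeout, mirroring the log's semantics.
package main

import (
	"errors"
	"fmt"
	"time"
)

type namedLock chan struct{}

func newNamedLock() namedLock { return make(namedLock, 1) }

// acquire retries every `delay` until the lock is free or `timeout` elapses.
func (l namedLock) acquire(delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		select {
		case l <- struct{}{}:
			return nil
		default:
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring lock")
			}
			time.Sleep(delay)
		}
	}
}

func (l namedLock) release() { <-l }

func main() {
	l := newNamedLock()
	if err := l.acquire(500*time.Millisecond, 13*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("acquired") // uncontended, so this returns almost immediately
	l.release()
}
```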
	I0115 03:01:55.929321   23809 start.go:93] Provisioning new machine with config: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:01:55.929452   23809 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0115 03:01:55.931024   23809 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 03:01:55.931106   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:01:55.931138   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:01:55.945054   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0115 03:01:55.945473   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:01:55.945917   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:01:55.945938   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:01:55.946237   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:01:55.946425   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:01:55.946574   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:01:55.946726   23809 start.go:159] libmachine.API.Create for "ha-680410" (driver="kvm2")
	I0115 03:01:55.946753   23809 client.go:168] LocalClient.Create starting
	I0115 03:01:55.946785   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 03:01:55.946818   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 03:01:55.946832   23809 main.go:141] libmachine: Parsing certificate...
	I0115 03:01:55.946895   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 03:01:55.946921   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 03:01:55.946939   23809 main.go:141] libmachine: Parsing certificate...
	I0115 03:01:55.946970   23809 main.go:141] libmachine: Running pre-create checks...
	I0115 03:01:55.946979   23809 main.go:141] libmachine: (ha-680410-m03) Calling .PreCreateCheck
	I0115 03:01:55.947095   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetConfigRaw
	I0115 03:01:55.947505   23809 main.go:141] libmachine: Creating machine...
	I0115 03:01:55.947519   23809 main.go:141] libmachine: (ha-680410-m03) Calling .Create
	I0115 03:01:55.947665   23809 main.go:141] libmachine: (ha-680410-m03) Creating KVM machine...
	I0115 03:01:55.949020   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found existing default KVM network
	I0115 03:01:55.949143   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found existing private KVM network mk-ha-680410
	I0115 03:01:55.949304   23809 main.go:141] libmachine: (ha-680410-m03) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03 ...
	I0115 03:01:55.949334   23809 main.go:141] libmachine: (ha-680410-m03) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 03:01:55.949349   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:55.949253   24660 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 03:01:55.949407   23809 main.go:141] libmachine: (ha-680410-m03) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 03:01:56.160656   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:56.160528   24660 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa...
	I0115 03:01:56.453479   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:56.453325   24660 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/ha-680410-m03.rawdisk...
	I0115 03:01:56.453518   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Writing magic tar header
	I0115 03:01:56.453536   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Writing SSH key tar header
	I0115 03:01:56.453556   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:56.453451   24660 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03 ...
	I0115 03:01:56.453576   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03
	I0115 03:01:56.453590   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 03:01:56.453608   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 03:01:56.453626   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 03:01:56.453637   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 03:01:56.453643   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins
	I0115 03:01:56.453649   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home
	I0115 03:01:56.453655   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Skipping /home - not owner
	I0115 03:01:56.453857   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03 (perms=drwx------)
	I0115 03:01:56.453892   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 03:01:56.453908   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 03:01:56.453920   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 03:01:56.453931   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 03:01:56.453946   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 03:01:56.453961   23809 main.go:141] libmachine: (ha-680410-m03) Creating domain...
	I0115 03:01:56.454692   23809 main.go:141] libmachine: (ha-680410-m03) define libvirt domain using xml: 
	I0115 03:01:56.454716   23809 main.go:141] libmachine: (ha-680410-m03) <domain type='kvm'>
	I0115 03:01:56.454727   23809 main.go:141] libmachine: (ha-680410-m03)   <name>ha-680410-m03</name>
	I0115 03:01:56.454738   23809 main.go:141] libmachine: (ha-680410-m03)   <memory unit='MiB'>2200</memory>
	I0115 03:01:56.454750   23809 main.go:141] libmachine: (ha-680410-m03)   <vcpu>2</vcpu>
	I0115 03:01:56.454755   23809 main.go:141] libmachine: (ha-680410-m03)   <features>
	I0115 03:01:56.454762   23809 main.go:141] libmachine: (ha-680410-m03)     <acpi/>
	I0115 03:01:56.454767   23809 main.go:141] libmachine: (ha-680410-m03)     <apic/>
	I0115 03:01:56.454773   23809 main.go:141] libmachine: (ha-680410-m03)     <pae/>
	I0115 03:01:56.454780   23809 main.go:141] libmachine: (ha-680410-m03)     
	I0115 03:01:56.454786   23809 main.go:141] libmachine: (ha-680410-m03)   </features>
	I0115 03:01:56.454798   23809 main.go:141] libmachine: (ha-680410-m03)   <cpu mode='host-passthrough'>
	I0115 03:01:56.454825   23809 main.go:141] libmachine: (ha-680410-m03)   
	I0115 03:01:56.454844   23809 main.go:141] libmachine: (ha-680410-m03)   </cpu>
	I0115 03:01:56.454854   23809 main.go:141] libmachine: (ha-680410-m03)   <os>
	I0115 03:01:56.454861   23809 main.go:141] libmachine: (ha-680410-m03)     <type>hvm</type>
	I0115 03:01:56.454868   23809 main.go:141] libmachine: (ha-680410-m03)     <boot dev='cdrom'/>
	I0115 03:01:56.454878   23809 main.go:141] libmachine: (ha-680410-m03)     <boot dev='hd'/>
	I0115 03:01:56.454884   23809 main.go:141] libmachine: (ha-680410-m03)     <bootmenu enable='no'/>
	I0115 03:01:56.454889   23809 main.go:141] libmachine: (ha-680410-m03)   </os>
	I0115 03:01:56.454895   23809 main.go:141] libmachine: (ha-680410-m03)   <devices>
	I0115 03:01:56.454901   23809 main.go:141] libmachine: (ha-680410-m03)     <disk type='file' device='cdrom'>
	I0115 03:01:56.454912   23809 main.go:141] libmachine: (ha-680410-m03)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/boot2docker.iso'/>
	I0115 03:01:56.454918   23809 main.go:141] libmachine: (ha-680410-m03)       <target dev='hdc' bus='scsi'/>
	I0115 03:01:56.454925   23809 main.go:141] libmachine: (ha-680410-m03)       <readonly/>
	I0115 03:01:56.454940   23809 main.go:141] libmachine: (ha-680410-m03)     </disk>
	I0115 03:01:56.454946   23809 main.go:141] libmachine: (ha-680410-m03)     <disk type='file' device='disk'>
	I0115 03:01:56.454957   23809 main.go:141] libmachine: (ha-680410-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 03:01:56.455025   23809 main.go:141] libmachine: (ha-680410-m03)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/ha-680410-m03.rawdisk'/>
	I0115 03:01:56.455052   23809 main.go:141] libmachine: (ha-680410-m03)       <target dev='hda' bus='virtio'/>
	I0115 03:01:56.455062   23809 main.go:141] libmachine: (ha-680410-m03)     </disk>
	I0115 03:01:56.455079   23809 main.go:141] libmachine: (ha-680410-m03)     <interface type='network'>
	I0115 03:01:56.455094   23809 main.go:141] libmachine: (ha-680410-m03)       <source network='mk-ha-680410'/>
	I0115 03:01:56.455108   23809 main.go:141] libmachine: (ha-680410-m03)       <model type='virtio'/>
	I0115 03:01:56.455121   23809 main.go:141] libmachine: (ha-680410-m03)     </interface>
	I0115 03:01:56.455133   23809 main.go:141] libmachine: (ha-680410-m03)     <interface type='network'>
	I0115 03:01:56.455147   23809 main.go:141] libmachine: (ha-680410-m03)       <source network='default'/>
	I0115 03:01:56.455158   23809 main.go:141] libmachine: (ha-680410-m03)       <model type='virtio'/>
	I0115 03:01:56.455169   23809 main.go:141] libmachine: (ha-680410-m03)     </interface>
	I0115 03:01:56.455181   23809 main.go:141] libmachine: (ha-680410-m03)     <serial type='pty'>
	I0115 03:01:56.455195   23809 main.go:141] libmachine: (ha-680410-m03)       <target port='0'/>
	I0115 03:01:56.455204   23809 main.go:141] libmachine: (ha-680410-m03)     </serial>
	I0115 03:01:56.455218   23809 main.go:141] libmachine: (ha-680410-m03)     <console type='pty'>
	I0115 03:01:56.455231   23809 main.go:141] libmachine: (ha-680410-m03)       <target type='serial' port='0'/>
	I0115 03:01:56.455251   23809 main.go:141] libmachine: (ha-680410-m03)     </console>
	I0115 03:01:56.455262   23809 main.go:141] libmachine: (ha-680410-m03)     <rng model='virtio'>
	I0115 03:01:56.455269   23809 main.go:141] libmachine: (ha-680410-m03)       <backend model='random'>/dev/random</backend>
	I0115 03:01:56.455282   23809 main.go:141] libmachine: (ha-680410-m03)     </rng>
	I0115 03:01:56.455294   23809 main.go:141] libmachine: (ha-680410-m03)     
	I0115 03:01:56.455305   23809 main.go:141] libmachine: (ha-680410-m03)     
	I0115 03:01:56.455316   23809 main.go:141] libmachine: (ha-680410-m03)   </devices>
	I0115 03:01:56.455333   23809 main.go:141] libmachine: (ha-680410-m03) </domain>
	I0115 03:01:56.455351   23809 main.go:141] libmachine: (ha-680410-m03) 
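
The tab-indented lines above are a single libvirt domain XML printed one line at a time through the logger. As a reading aid, here is a minimal Go sketch of rendering a comparable definition with text/template before handing it to libvirt; the struct, field names, and file paths are illustrative assumptions, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values substituted into the XML;
// the real kvm2 driver carries many more knobs.
type domainConfig struct {
	Name       string
	MemoryMiB  int
	CPUs       int
	ISOPath    string
	DiskPath   string
	PrivateNet string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:       "ha-680410-m03",
		MemoryMiB:  2200,
		CPUs:       2,
		ISOPath:    "/path/to/boot2docker.iso",
		DiskPath:   "/path/to/ha-680410-m03.rawdisk",
		PrivateNet: "mk-ha-680410",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

As the log shows, the real driver additionally wires in a second NIC on the default network, a serial console, and a virtio RNG device.
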
	I0115 03:01:56.462134   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:14:ed:aa in network default
	I0115 03:01:56.462672   23809 main.go:141] libmachine: (ha-680410-m03) Ensuring networks are active...
	I0115 03:01:56.462702   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:56.463363   23809 main.go:141] libmachine: (ha-680410-m03) Ensuring network default is active
	I0115 03:01:56.463743   23809 main.go:141] libmachine: (ha-680410-m03) Ensuring network mk-ha-680410 is active
	I0115 03:01:56.464065   23809 main.go:141] libmachine: (ha-680410-m03) Getting domain xml...
	I0115 03:01:56.464732   23809 main.go:141] libmachine: (ha-680410-m03) Creating domain...
	I0115 03:01:57.688561   23809 main.go:141] libmachine: (ha-680410-m03) Waiting to get IP...
	I0115 03:01:57.689332   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:57.689794   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:57.689824   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:57.689741   24660 retry.go:31] will retry after 283.330091ms: waiting for machine to come up
	I0115 03:01:57.974264   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:57.974705   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:57.974734   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:57.974651   24660 retry.go:31] will retry after 285.927902ms: waiting for machine to come up
	I0115 03:01:58.261924   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:58.262382   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:58.262412   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:58.262337   24660 retry.go:31] will retry after 338.28018ms: waiting for machine to come up
	I0115 03:01:58.601703   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:58.602144   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:58.602173   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:58.602094   24660 retry.go:31] will retry after 442.790409ms: waiting for machine to come up
	I0115 03:01:59.046303   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:59.046656   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:59.046683   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:59.046613   24660 retry.go:31] will retry after 540.553612ms: waiting for machine to come up
	I0115 03:01:59.588416   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:59.588733   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:59.588761   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:59.588700   24660 retry.go:31] will retry after 669.473346ms: waiting for machine to come up
	I0115 03:02:00.259398   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:00.259808   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:00.259837   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:00.259757   24660 retry.go:31] will retry after 819.907617ms: waiting for machine to come up
	I0115 03:02:01.081186   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:01.081616   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:01.081642   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:01.081592   24660 retry.go:31] will retry after 1.093402731s: waiting for machine to come up
	I0115 03:02:02.177200   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:02.177751   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:02.177781   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:02.177698   24660 retry.go:31] will retry after 1.514211711s: waiting for machine to come up
	I0115 03:02:03.694257   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:03.694687   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:03.694717   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:03.694629   24660 retry.go:31] will retry after 1.686814242s: waiting for machine to come up
	I0115 03:02:05.383342   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:05.383759   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:05.383792   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:05.383705   24660 retry.go:31] will retry after 1.928980865s: waiting for machine to come up
	I0115 03:02:07.315251   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:07.315742   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:07.315780   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:07.315683   24660 retry.go:31] will retry after 3.16632128s: waiting for machine to come up
	I0115 03:02:10.484411   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:10.484778   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:10.484801   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:10.484738   24660 retry.go:31] will retry after 3.998322995s: waiting for machine to come up
	I0115 03:02:14.484134   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:14.484565   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:14.484584   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:14.484490   24660 retry.go:31] will retry after 4.72777601s: waiting for machine to come up
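
The retry cadence above (283ms, 285ms, 338ms, ... up to ~4.7s) comes from a jittered, growing backoff while polling the private network for a DHCP lease. A self-contained sketch of that shape; lookupIP is a placeholder, and the schedule is illustrative rather than minikube's exact retry.go parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases, which is what
// the DBG lines above are doing between retries.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jittered, capped growth, matching the shape of the logged waits.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no IP after %v", timeout)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}
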
	I0115 03:02:19.215650   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.216082   23809 main.go:141] libmachine: (ha-680410-m03) Found IP for machine: 192.168.39.182
	I0115 03:02:19.216110   23809 main.go:141] libmachine: (ha-680410-m03) Reserving static IP address...
	I0115 03:02:19.216126   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has current primary IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.216522   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find host DHCP lease matching {name: "ha-680410-m03", mac: "52:54:00:d4:18:a6", ip: "192.168.39.182"} in network mk-ha-680410
	I0115 03:02:19.286224   23809 main.go:141] libmachine: (ha-680410-m03) Reserved static IP address: 192.168.39.182
	I0115 03:02:19.286250   23809 main.go:141] libmachine: (ha-680410-m03) Waiting for SSH to be available...
	I0115 03:02:19.286263   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Getting to WaitForSSH function...
	I0115 03:02:19.288986   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.289426   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.289458   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.289579   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Using SSH client type: external
	I0115 03:02:19.289602   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa (-rw-------)
	I0115 03:02:19.289645   23809 main.go:141] libmachine: (ha-680410-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 03:02:19.289668   23809 main.go:141] libmachine: (ha-680410-m03) DBG | About to run SSH command:
	I0115 03:02:19.289683   23809 main.go:141] libmachine: (ha-680410-m03) DBG | exit 0
	I0115 03:02:19.386813   23809 main.go:141] libmachine: (ha-680410-m03) DBG | SSH cmd err, output: <nil>: 
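
The probe above shells out to the system ssh client and simply runs `exit 0` until the command succeeds. A minimal sketch of the same idea with os/exec; the flag set is trimmed from the logged invocation and the key path is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH polls the guest by running `exit 0` over ssh, mirroring the
// external-client path in the log; attempt count and sleep are illustrative.
func waitForSSH(ip, keyPath string) error {
	for i := 0; i < 30; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest sshd is up and the key is accepted
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", ip)
}

func main() {
	if err := waitForSSH("192.168.39.182", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}
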
	I0115 03:02:19.387022   23809 main.go:141] libmachine: (ha-680410-m03) KVM machine creation complete!
	I0115 03:02:19.387335   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetConfigRaw
	I0115 03:02:19.387861   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:19.388061   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:19.388221   23809 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 03:02:19.388233   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:02:19.389489   23809 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 03:02:19.389505   23809 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 03:02:19.389511   23809 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 03:02:19.389518   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.391638   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.392024   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.392056   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.392217   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.392396   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.392569   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.392700   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.392878   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.393225   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.393237   23809 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 03:02:19.518200   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 03:02:19.518221   23809 main.go:141] libmachine: Detecting the provisioner...
	I0115 03:02:19.518229   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.520862   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.521192   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.521217   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.521429   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.521611   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.521739   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.521864   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.522038   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.522387   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.522399   23809 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 03:02:19.651979   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 03:02:19.652046   23809 main.go:141] libmachine: found compatible host: buildroot
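
Provisioner detection keys off the ID field of /etc/os-release. A small sketch of parsing the payload logged above into a map; the detection table itself lives in libmachine and is not shown:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// The /etc/os-release contents as captured in the log above.
	raw := "NAME=Buildroot\nVERSION=2021.02.12-1-g19d536a-dirty\nID=buildroot\nVERSION_ID=2021.02.12\nPRETTY_NAME=\"Buildroot 2021.02.12\"\n"
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		// Each line is KEY=VALUE; values may be double-quoted.
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}
	fmt.Println("provisioner:", fields["ID"]) // prints: provisioner: buildroot
}
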
	I0115 03:02:19.652060   23809 main.go:141] libmachine: Provisioning with buildroot...
	I0115 03:02:19.652075   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:02:19.652353   23809 buildroot.go:166] provisioning hostname "ha-680410-m03"
	I0115 03:02:19.652382   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:02:19.652562   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.655517   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.656044   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.656074   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.656221   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.656434   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.656622   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.656767   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.656923   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.657300   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.657314   23809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-680410-m03 && echo "ha-680410-m03" | sudo tee /etc/hostname
	I0115 03:02:19.799601   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-680410-m03
	
	I0115 03:02:19.799640   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.802372   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.802722   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.802747   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.802921   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.803115   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.803267   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.803410   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.803550   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.803854   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.803871   23809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-680410-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-680410-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-680410-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 03:02:19.938954   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
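
The hostname step is two commands glued together, as logged: set the kernel hostname, then persist it to /etc/hostname, followed by the idempotent /etc/hosts edit shown above. A tiny sketch of building the first command string; the helper name is made up for illustration:

package main

import "fmt"

// hostnameCmd reproduces the shape of the provisioning command in the log;
// minikube assembles it elsewhere in libmachine's buildroot provisioner.
func hostnameCmd(name string) string {
	return fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-680410-m03"))
}
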
	I0115 03:02:19.938985   23809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 03:02:19.939004   23809 buildroot.go:174] setting up certificates
	I0115 03:02:19.939014   23809 provision.go:84] configureAuth start
	I0115 03:02:19.939027   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:02:19.939320   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:19.941872   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.942203   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.942234   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.942368   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.944336   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.944731   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.944756   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.944889   23809 provision.go:143] copyHostCerts
	I0115 03:02:19.944912   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 03:02:19.944940   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem, removing ...
	I0115 03:02:19.944951   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 03:02:19.945012   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 03:02:19.945088   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 03:02:19.945105   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem, removing ...
	I0115 03:02:19.945110   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 03:02:19.945135   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
	I0115 03:02:19.945176   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 03:02:19.945191   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem, removing ...
	I0115 03:02:19.945199   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 03:02:19.945222   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 03:02:19.945275   23809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.ha-680410-m03 san=[127.0.0.1 192.168.39.182 ha-680410-m03 localhost minikube]
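
The SAN list above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.178 192.168.39.182 192.168.39.254) has to cover every control-plane IP plus the HA VIP so the apiserver certificate stays valid as nodes join. A compact crypto/x509 sketch of issuing a certificate with IP and DNS SANs; it self-signs for brevity where minikube signs with its cluster CA, and the ECDSA key type is an arbitrary choice:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-680410-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs echoing the m03 server cert in the log: loopback, the node
		// IP, and the host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.182")},
		DNSNames:    []string{"ha-680410-m03", "localhost", "minikube"},
	}
	// Self-signed: template doubles as parent. minikube instead passes its
	// CA cert and CA key here.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
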
	I0115 03:02:19.993053   23809 provision.go:177] copyRemoteCerts
	I0115 03:02:19.993096   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 03:02:19.993112   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.995574   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.995947   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.995977   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.996169   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.996338   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.996489   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.996630   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:02:20.089335   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 03:02:20.089415   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 03:02:20.111300   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 03:02:20.111362   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0115 03:02:20.134309   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 03:02:20.134358   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 03:02:20.155039   23809 provision.go:87] duration metric: took 216.011418ms to configureAuth
	I0115 03:02:20.155064   23809 buildroot.go:189] setting minikube options for container-runtime
	I0115 03:02:20.155314   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:02:20.155333   23809 main.go:141] libmachine: Checking connection to Docker...
	I0115 03:02:20.155343   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetURL
	I0115 03:02:20.156466   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Using libvirt version 6000000
	I0115 03:02:20.158686   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.159061   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.159089   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.159250   23809 main.go:141] libmachine: Docker is up and running!
	I0115 03:02:20.159263   23809 main.go:141] libmachine: Reticulating splines...
	I0115 03:02:20.159270   23809 client.go:171] duration metric: took 24.212507222s to LocalClient.Create
	I0115 03:02:20.159306   23809 start.go:167] duration metric: took 24.212577721s to libmachine.API.Create "ha-680410"
	I0115 03:02:20.159318   23809 start.go:293] postStartSetup for "ha-680410-m03" (driver="kvm2")
	I0115 03:02:20.159332   23809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 03:02:20.159362   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.159577   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 03:02:20.159598   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:20.161614   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.162001   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.162027   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.162178   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.162363   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.162507   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.162649   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:02:20.257880   23809 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 03:02:20.262302   23809 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 03:02:20.262329   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 03:02:20.262391   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 03:02:20.262459   23809 filesync.go:149] local asset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> 149542.pem in /etc/ssl/certs
	I0115 03:02:20.262469   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /etc/ssl/certs/149542.pem
	I0115 03:02:20.262549   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 03:02:20.271448   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /etc/ssl/certs/149542.pem (1708 bytes)
	I0115 03:02:20.292827   23809 start.go:296] duration metric: took 133.498451ms for postStartSetup
	I0115 03:02:20.292874   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetConfigRaw
	I0115 03:02:20.293433   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:20.296020   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.296434   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.296467   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.296830   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 03:02:20.296997   23809 start.go:128] duration metric: took 24.36753448s to createHost
	I0115 03:02:20.297017   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:20.299002   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.299316   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.299345   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.299472   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.299647   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.299773   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.299869   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.300023   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:20.300463   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:20.300478   23809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 03:02:20.431976   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705287740.414243507
	
	I0115 03:02:20.431997   23809 fix.go:216] guest clock: 1705287740.414243507
	I0115 03:02:20.432005   23809 fix.go:229] Guest: 2024-01-15 03:02:20.414243507 +0000 UTC Remote: 2024-01-15 03:02:20.297006622 +0000 UTC m=+232.440540762 (delta=117.236885ms)
	I0115 03:02:20.432022   23809 fix.go:200] guest clock delta is within tolerance: 117.236885ms
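
The clock check parses the guest's `date +%s.%N` output and compares it against the host-side timestamp; the 117.236885ms delta above is inside tolerance. A small sketch of that computation using the exact values from the log (the 2s tolerance constant is an assumption, not minikube's configured bound):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1705287740.414243507"
// above) into a time.Time. %N is always nine digits, so the fractional
// part parses directly as nanoseconds.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1705287740.414243507")
	if err != nil {
		panic(err)
	}
	// The "Remote" timestamp from the fix.go line above.
	remote := time.Date(2024, 1, 15, 3, 2, 20, 297006622, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}
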
	I0115 03:02:20.432029   23809 start.go:83] releasing machines lock for "ha-680410-m03", held for 24.502717337s
	I0115 03:02:20.432055   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.432293   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:20.434946   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.435329   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.435357   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.437924   23809 out.go:177] * Found network options:
	I0115 03:02:20.439485   23809 out.go:177]   - NO_PROXY=192.168.39.194,192.168.39.178
	W0115 03:02:20.440783   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 03:02:20.440802   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 03:02:20.440814   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.441345   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.441521   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.441615   23809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 03:02:20.441651   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	W0115 03:02:20.441747   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 03:02:20.441763   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 03:02:20.441831   23809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 03:02:20.441852   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:20.444177   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.444569   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.444602   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.444624   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.444804   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.444970   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.445089   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.445110   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.445119   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.445263   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.445274   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:02:20.445375   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.445494   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.445606   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	W0115 03:02:20.565161   23809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 03:02:20.565238   23809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 03:02:20.580337   23809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 03:02:20.580364   23809 start.go:494] detecting cgroup driver to use...
	I0115 03:02:20.580425   23809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 03:02:20.612466   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 03:02:20.624298   23809 docker.go:217] disabling cri-docker service (if available) ...
	I0115 03:02:20.624354   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 03:02:20.637658   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 03:02:20.650766   23809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 03:02:20.764448   23809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 03:02:20.880847   23809 docker.go:233] disabling docker service ...
	I0115 03:02:20.880908   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 03:02:20.896381   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 03:02:20.910450   23809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 03:02:21.015962   23809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 03:02:21.130461   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 03:02:21.143499   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 03:02:21.162509   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 03:02:21.173746   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 03:02:21.184322   23809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 03:02:21.184381   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 03:02:21.194256   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 03:02:21.203818   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 03:02:21.212928   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 03:02:21.222770   23809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 03:02:21.232037   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0115 03:02:21.240956   23809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 03:02:21.248671   23809 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 03:02:21.248732   23809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 03:02:21.261150   23809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 03:02:21.269189   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:02:21.385202   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
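
Each of the sed one-liners above is an anchored, in-place regex rewrite of /etc/containerd/config.toml, capped by a daemon-reload and restart. The same SystemdCgroup edit expressed with Go's regexp package, run against an inline sample instead of the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Inline stand-in for /etc/containerd/config.toml.
	conf := []byte("[plugins]\n  SystemdCgroup = true\n")
	// (?m) makes ^/$ match per line; ${1} keeps the original indentation,
	// exactly like the \1 back-reference in the logged sed command.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(out)) // prints the file with SystemdCgroup = false
}
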
	I0115 03:02:21.414976   23809 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 03:02:21.415078   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 03:02:21.420123   23809 retry.go:31] will retry after 1.048823659s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0115 03:02:22.469389   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 03:02:22.474867   23809 start.go:562] Will wait 60s for crictl version
	I0115 03:02:22.474916   23809 ssh_runner.go:195] Run: which crictl
	I0115 03:02:22.478743   23809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 03:02:22.524924   23809 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 03:02:22.525022   23809 ssh_runner.go:195] Run: containerd --version
	I0115 03:02:22.558246   23809 ssh_runner.go:195] Run: containerd --version
	I0115 03:02:22.593483   23809 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 03:02:22.594868   23809 out.go:177]   - env NO_PROXY=192.168.39.194
	I0115 03:02:22.596333   23809 out.go:177]   - env NO_PROXY=192.168.39.194,192.168.39.178
	I0115 03:02:22.597546   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:22.600264   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:22.600702   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:22.600720   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:22.600922   23809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 03:02:22.610566   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 03:02:22.625242   23809 mustload.go:65] Loading cluster: ha-680410
	I0115 03:02:22.625522   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:02:22.625910   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:02:22.625950   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:02:22.642170   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0115 03:02:22.642590   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:02:22.643064   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:02:22.643091   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:02:22.643424   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:02:22.643614   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:02:22.645297   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:02:22.645661   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:02:22.645705   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:02:22.661250   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0115 03:02:22.661663   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:02:22.662152   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:02:22.662172   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:02:22.662461   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:02:22.662626   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:02:22.662785   23809 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410 for IP: 192.168.39.182
	I0115 03:02:22.662795   23809 certs.go:194] generating shared ca certs ...
	I0115 03:02:22.662806   23809 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 03:02:22.662920   23809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 03:02:22.662954   23809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 03:02:22.662963   23809 certs.go:256] generating profile certs ...
	I0115 03:02:22.663026   23809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key
	I0115 03:02:22.663049   23809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4
	I0115 03:02:22.663060   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.178 192.168.39.182 192.168.39.254]
	I0115 03:02:22.879349   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4 ...
	I0115 03:02:22.879379   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4: {Name:mk2126f339e3e0824b456c72fb72c0e7f9970d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 03:02:22.879575   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4 ...
	I0115 03:02:22.879589   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4: {Name:mk74bb1ea4d6a89296545545641cdd0e1c436257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 03:02:22.879688   23809 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt
	I0115 03:02:22.879861   23809 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key
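The apiserver certificate generated above carries SANs for the in-cluster service IP (10.96.0.1), loopback, every control-plane node IP, and the kube-vip VIP (192.168.39.254), so clients can validate TLS against any of those addresses. minikube signs it in Go (crypto.go); purely as an illustrative sketch, an openssl equivalent would look like this (filenames, CN, and validity are placeholders, not minikube's actual values):

	# Illustrative only: minikube does not shell out to openssl for this
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -out apiserver.csr -subj "/CN=minikube"
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -out apiserver.crt -days 365 -extfile <(printf '%s' \
	  'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.194,IP:192.168.39.178,IP:192.168.39.182,IP:192.168.39.254')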
	I0115 03:02:22.880054   23809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key
	I0115 03:02:22.880078   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 03:02:22.880105   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 03:02:22.880128   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 03:02:22.880149   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 03:02:22.880170   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 03:02:22.880194   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 03:02:22.880220   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 03:02:22.880241   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 03:02:22.880310   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem (1338 bytes)
	W0115 03:02:22.880354   23809 certs.go:480] ignoring /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954_empty.pem, impossibly tiny 0 bytes
	I0115 03:02:22.880365   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 03:02:22.880387   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 03:02:22.880412   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 03:02:22.880440   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 03:02:22.880483   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem (1708 bytes)
	I0115 03:02:22.880510   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /usr/share/ca-certificates/149542.pem
	I0115 03:02:22.880530   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:22.880548   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem -> /usr/share/ca-certificates/14954.pem
	I0115 03:02:22.880582   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:02:22.883630   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:22.884009   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:02:22.884031   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:22.884231   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:02:22.884433   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:02:22.884579   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:02:22.884729   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:02:22.963683   23809 ssh_runner.go:195] Run: stat -c "%s" /var/lib/minikube/certs/sa.pub
	I0115 03:02:22.969225   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0115 03:02:22.981102   23809 ssh_runner.go:195] Run: stat -c "%s" /var/lib/minikube/certs/sa.key
	I0115 03:02:22.985652   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0115 03:02:22.997376   23809 ssh_runner.go:195] Run: stat -c "%s" /var/lib/minikube/certs/front-proxy-ca.crt
	I0115 03:02:23.002426   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0115 03:02:23.014305   23809 ssh_runner.go:195] Run: stat -c "%s" /var/lib/minikube/certs/front-proxy-ca.key
	I0115 03:02:23.018135   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0115 03:02:23.031106   23809 ssh_runner.go:195] Run: stat -c "%s" /var/lib/minikube/certs/etcd/ca.crt
	I0115 03:02:23.037022   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0115 03:02:23.048982   23809 ssh_runner.go:195] Run: stat -c "%s" /var/lib/minikube/certs/etcd/ca.key
	I0115 03:02:23.052710   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0115 03:02:23.063349   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 03:02:23.086374   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 03:02:23.108596   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 03:02:23.129896   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 03:02:23.151909   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0115 03:02:23.174825   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 03:02:23.197430   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 03:02:23.220501   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 03:02:23.244241   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /usr/share/ca-certificates/149542.pem (1708 bytes)
	I0115 03:02:23.266017   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 03:02:23.288669   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem --> /usr/share/ca-certificates/14954.pem (1338 bytes)
	I0115 03:02:23.312913   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0115 03:02:23.328701   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0115 03:02:23.343776   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0115 03:02:23.359295   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0115 03:02:23.375874   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0115 03:02:23.390951   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0115 03:02:23.406572   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0115 03:02:23.421886   23809 ssh_runner.go:195] Run: openssl version
	I0115 03:02:23.426890   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149542.pem && ln -fs /usr/share/ca-certificates/149542.pem /etc/ssl/certs/149542.pem"
	I0115 03:02:23.437992   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149542.pem
	I0115 03:02:23.442428   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 15 02:54 /usr/share/ca-certificates/149542.pem
	I0115 03:02:23.442471   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149542.pem
	I0115 03:02:23.447720   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149542.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 03:02:23.457428   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 03:02:23.467248   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:23.471376   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:23.471439   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:23.476583   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 03:02:23.486959   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14954.pem && ln -fs /usr/share/ca-certificates/14954.pem /etc/ssl/certs/14954.pem"
	I0115 03:02:23.497899   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14954.pem
	I0115 03:02:23.502017   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:54 /usr/share/ca-certificates/14954.pem
	I0115 03:02:23.502059   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14954.pem
	I0115 03:02:23.507184   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14954.pem /etc/ssl/certs/51391683.0"
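The ls/hash/symlink steps above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located by a symlink named after its subject hash plus a ".0" suffix (b5213941.0 for minikubeCA above). The same step by hand, for one of the certs from the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	# Subject-hash name that OpenSSL uses to look the CA up
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"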
	I0115 03:02:23.517561   23809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 03:02:23.521685   23809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 03:02:23.521732   23809 kubeadm.go:928] updating node {m03 192.168.39.182 8443 v1.28.4 containerd true true} ...
	I0115 03:02:23.521807   23809 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-680410-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
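One detail of the generated drop-in is easy to misread: the bare "ExecStart=" line is deliberate. systemd appends ExecStart entries, so an override must first clear the inherited command with an empty assignment before setting the new one. Once the files are written, the effective unit can be inspected with:

	sudo systemctl daemon-reload
	systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in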
	I0115 03:02:23.521830   23809 kube-vip.go:101] generating kube-vip config ...
	I0115 03:02:23.521858   23809 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_ddns
	      value: "false"
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.6.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
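This manifest runs kube-vip as a static pod on each control plane; the instances elect a leader through the plndr-cp-lock Lease (vip_leasename above), and the winner binds 192.168.39.254 to eth0. Two quick checks, assuming a working kubeconfig and SSH access to a node:

	# Which kube-vip instance currently holds the leader lease?
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'; echo
	# On a control-plane node: is the VIP bound locally?
	ip -4 addr show dev eth0 | grep 192.168.39.254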
	I0115 03:02:23.521886   23809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 03:02:23.530677   23809 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0115 03:02:23.530729   23809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0115 03:02:23.540741   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0115 03:02:23.540763   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:02:23.540763   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0115 03:02:23.540781   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 03:02:23.540741   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0115 03:02:23.540830   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:02:23.540876   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:02:23.540832   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 03:02:23.548286   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0115 03:02:23.548311   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0115 03:02:23.548313   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0115 03:02:23.548328   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0115 03:02:23.568645   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:02:23.568728   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:02:23.631336   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0115 03:02:23.631381   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
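Each binary is fetched from dl.k8s.io with a checksum= fragment pointing at the published sha256, then copied into /var/lib/minikube/binaries/v1.28.4 on the node. A hand-rolled sketch of the same download-verify-install step (the published .sha256 files contain only the digest, hence the filename appended for sha256sum -c):

	V=v1.28.4; ARCH=amd64
	for BIN in kubeadm kubectl kubelet; do
	  curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/${BIN}"
	  # Verify against the published digest before installing
	  echo "$(curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/${BIN}.sha256")  ${BIN}" | sha256sum -c -
	  sudo install -D -m 0755 "$BIN" "/var/lib/minikube/binaries/${V}/${BIN}"
	done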
	I0115 03:02:24.439936   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0115 03:02:24.448798   23809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0115 03:02:24.464478   23809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 03:02:24.480321   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1265 bytes)
	I0115 03:02:24.495763   23809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0115 03:02:24.499260   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 03:02:24.510148   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:02:24.615593   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:02:24.629982   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:02:24.630417   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:02:24.630468   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:02:24.646588   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0115 03:02:24.647008   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:02:24.647557   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:02:24.647578   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:02:24.647969   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:02:24.648148   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:02:24.648282   23809 start.go:316] joinCluster: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 03:02:24.648428   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 03:02:24.648447   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:02:24.651665   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:24.652159   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:02:24.652186   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:24.652377   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:02:24.652560   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:02:24.652735   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:02:24.652931   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:02:24.852299   23809 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:02:24.852345   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p6qcyi.4ds32fdjmsrqfkef --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m03 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443"
	I0115 03:02:50.660804   23809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p6qcyi.4ds32fdjmsrqfkef --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m03 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443": (25.808434304s)
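Joining an extra control plane is a two-step handshake: the existing primary mints a join command with a non-expiring token, and the new node runs it with control-plane flags appended. The shared certificates were pre-copied earlier in this log, which is why no --certificate-key appears here. A condensed sketch of the two commands (addresses and names from the log; $JOIN relies on word splitting, fine for a sketch):

	# On the primary: mint a non-expiring join command
	JOIN=$(sudo kubeadm token create --print-join-command --ttl=0)
	# On the joining node: run it as an additional control plane
	sudo $JOIN --control-plane --apiserver-advertise-address=192.168.39.182 \
	  --apiserver-bind-port=8443 --node-name=ha-680410-m03 \
	  --cri-socket unix:///run/containerd/containerd.sock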
	I0115 03:02:50.660845   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 03:02:51.152135   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-680410-m03 minikube.k8s.io/updated_at=2024_01_15T03_02_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=false
	I0115 03:02:51.295674   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-680410-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0115 03:02:51.438057   23809 start.go:318] duration metric: took 26.789770985s to joinCluster
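The two kubectl calls above finish node bring-up: the first stamps minikube's bookkeeping labels on the node, and the second removes the default control-plane taint so the node can also run regular workloads. Reduced to their essentials:

	kubectl label --overwrite nodes ha-680410-m03 \
	  minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=false
	# The trailing "-" removes the taint rather than adding it
	kubectl taint nodes ha-680410-m03 node-role.kubernetes.io/control-plane:NoSchedule-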
	I0115 03:02:51.438129   23809 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:02:51.438624   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:02:51.439768   23809 out.go:177] * Verifying Kubernetes components...
	I0115 03:02:51.441000   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:02:51.637150   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:02:51.654178   23809 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 03:02:51.654506   23809 kapi.go:59] client config for ha-680410: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key", CAFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0115 03:02:51.654572   23809 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.194:8443
	I0115 03:02:51.654805   23809 node_ready.go:35] waiting up to 6m0s for node "ha-680410-m03" to be "Ready" ...
	I0115 03:02:51.654887   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:51.654897   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:51.654906   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:51.654916   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:51.658567   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:52.155669   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:52.155694   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:52.155704   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:52.155714   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:52.160286   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:52.655736   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:52.655759   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:52.655770   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:52.655779   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:52.660900   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:02:53.155956   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:53.155975   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:53.155982   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:53.155988   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:53.168792   23809 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0115 03:02:53.655545   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:53.655573   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:53.655585   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:53.655594   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:53.659816   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:53.660674   23809 node_ready.go:53] node "ha-680410-m03" has status "Ready":"False"
	I0115 03:02:54.155050   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:54.155080   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:54.155092   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:54.155101   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:54.158578   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:54.655296   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:54.655315   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:54.655323   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:54.655329   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:54.659140   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:55.155044   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:55.155063   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:55.155070   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:55.155076   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:55.158573   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:55.655042   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:55.655069   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:55.655080   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:55.655089   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:55.659840   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:56.155187   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:56.155218   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:56.155230   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:56.155240   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:56.163530   23809 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 03:02:56.164544   23809 node_ready.go:53] node "ha-680410-m03" has status "Ready":"False"
	I0115 03:02:56.655069   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:56.655092   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:56.655100   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:56.655106   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:56.660151   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:02:57.155164   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:57.155187   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:57.155195   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:57.155201   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:57.158602   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:57.655954   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:57.655978   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:57.655989   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:57.655998   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:57.665383   23809 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 03:02:58.155431   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:58.155456   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.155469   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.155477   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.159422   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.655156   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:58.655179   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.655187   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.655193   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.659921   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:58.660632   23809 node_ready.go:49] node "ha-680410-m03" has status "Ready":"True"
	I0115 03:02:58.660652   23809 node_ready.go:38] duration metric: took 7.00583143s for node "ha-680410-m03" to be "Ready" ...
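The round_trippers lines above are a roughly 500ms poll of the node object until its Ready condition turns True; the pod checks that follow do the same for each system-critical pod. The equivalent waits in one command each (timeouts mirror the 6m budget in the log):

	kubectl wait --for=condition=Ready node/ha-680410-m03 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m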
	I0115 03:02:58.660659   23809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 03:02:58.660727   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:02:58.660738   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.660745   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.660751   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.669926   23809 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 03:02:58.677850   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.677927   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-krvzt
	I0115 03:02:58.677938   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.677948   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.677958   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.681833   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.682521   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:02:58.682535   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.682542   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.682550   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.687066   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:58.687700   23809 pod_ready.go:92] pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.687721   23809 pod_ready.go:81] duration metric: took 9.848075ms for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.687732   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.687782   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mqq9g
	I0115 03:02:58.687791   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.687797   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.687803   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.690914   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.691955   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:02:58.691970   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.691980   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.691988   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.695265   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.695684   23809 pod_ready.go:92] pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.695703   23809 pod_ready.go:81] duration metric: took 7.963099ms for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.695714   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.695762   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410
	I0115 03:02:58.695771   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.695778   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.695784   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.698607   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.699061   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:02:58.699073   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.699080   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.699086   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.701654   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.702116   23809 pod_ready.go:92] pod "etcd-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.702131   23809 pod_ready.go:81] duration metric: took 6.409578ms for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.702141   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.702190   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m02
	I0115 03:02:58.702201   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.702212   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.702224   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.704794   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.705365   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:02:58.705384   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.705395   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.705406   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.707989   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.708568   23809 pod_ready.go:92] pod "etcd-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.708583   23809 pod_ready.go:81] duration metric: took 6.433746ms for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.708590   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.855920   23809 request.go:629] Waited for 147.283954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:58.855983   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:58.855988   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.855995   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.856001   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.859977   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.056067   23809 request.go:629] Waited for 195.181513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.056122   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.056127   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.056134   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.056141   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.059655   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.255649   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:59.255680   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.255689   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.255695   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.259660   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.455741   23809 request.go:629] Waited for 195.394895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.455796   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.455801   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.455809   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.455817   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.459566   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.709205   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:59.709232   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.709243   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.709251   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.717699   23809 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 03:02:59.856077   23809 request.go:629] Waited for 137.322858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.856154   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.856164   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.856174   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.856188   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.860088   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.209758   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:00.209778   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.209786   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.209799   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.213287   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.255460   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:00.255485   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.255493   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.255499   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.259529   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:00.709379   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:00.709400   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.709408   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.709414   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.713365   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.714455   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:00.714475   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.714486   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.714496   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.717621   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.718148   23809 pod_ready.go:102] pod "etcd-ha-680410-m03" in "kube-system" namespace has status "Ready":"False"
	I0115 03:03:01.208972   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:01.209002   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.209014   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.209023   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.212393   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:01.213249   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:01.213263   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.213270   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.213276   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.216292   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:03:01.709473   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:01.709493   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.709501   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.709507   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.713625   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:01.714330   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:01.714346   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.714354   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.714359   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.717634   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.209600   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:02.209623   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.209634   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.209643   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.213453   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.213990   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:02.214006   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.214016   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.214024   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.217508   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.709454   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:02.709472   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.709480   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.709487   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.713600   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:02.714525   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:02.714538   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.714545   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.714551   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.717946   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.718489   23809 pod_ready.go:102] pod "etcd-ha-680410-m03" in "kube-system" namespace has status "Ready":"False"
	I0115 03:03:03.209066   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:03.209083   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.209091   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.209098   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.212511   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:03.213171   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:03.213186   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.213193   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.213198   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.216363   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:03.709234   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:03.709253   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.709261   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.709266   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.714028   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:03.715385   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:03.715419   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.715431   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.715441   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.718519   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.209296   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:04.209317   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.209325   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.209331   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.213015   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.213947   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:04.213962   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.213972   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.213981   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.217508   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.709174   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:04.709195   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.709203   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.709209   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.712969   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.714285   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:04.714305   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.714315   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.714324   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.718669   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:04.720745   23809 pod_ready.go:92] pod "etcd-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:04.720764   23809 pod_ready.go:81] duration metric: took 6.01216922s for pod "etcd-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.720786   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.720848   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410
	I0115 03:03:04.720861   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.720868   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.720873   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.724905   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:04.725880   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:04.725899   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.725910   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.725920   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.731046   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:03:04.732089   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:04.732114   23809 pod_ready.go:81] duration metric: took 11.320601ms for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.732126   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.732196   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:03:04.732206   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.732215   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.732226   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.735489   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.736211   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:04.736227   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.736237   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.736246   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.739273   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.739917   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:04.739934   23809 pod_ready.go:81] duration metric: took 7.79758ms for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.739945   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.855235   23809 request.go:629] Waited for 115.213898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m03
	I0115 03:03:04.855337   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m03
	I0115 03:03:04.855351   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.855362   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.855375   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.860116   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:05.055822   23809 request.go:629] Waited for 194.866101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:05.055871   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:05.055876   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.055885   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.055891   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.059316   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:05.060070   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:05.060090   23809 pod_ready.go:81] duration metric: took 320.131298ms for pod "kube-apiserver-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
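	
	The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter: once the burst is spent, each request sleeps client-side until a token frees up, which is what request.go:629 reports. A minimal sketch, assuming a kubeconfig at the default path, of where those limits live (the QPS/Burst values here are illustrative, not minikube's):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Defaults are QPS=5, Burst=10; requests beyond the bucket wait
		// client-side, producing the ~200ms delays logged above.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready: %T\n", cs)
	}
	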
	I0115 03:03:05.060100   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.256162   23809 request.go:629] Waited for 195.993718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:03:05.256224   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:03:05.256234   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.256245   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.256257   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.259867   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:05.456024   23809 request.go:629] Waited for 195.357918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:05.456103   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:05.456110   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.456118   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.456124   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.465666   23809 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 03:03:05.466604   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:05.466624   23809 pod_ready.go:81] duration metric: took 406.515979ms for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.466638   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.655765   23809 request.go:629] Waited for 189.054178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:03:05.655856   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:03:05.655867   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.655878   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.655891   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.659773   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:05.856199   23809 request.go:629] Waited for 195.368131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:05.856271   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:05.856282   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.856290   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.856298   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.860663   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:05.861676   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:05.861698   23809 pod_ready.go:81] duration metric: took 395.047492ms for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.861710   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.056149   23809 request.go:629] Waited for 194.375054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m03
	I0115 03:03:06.056256   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m03
	I0115 03:03:06.056267   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.056277   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.056286   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.062639   23809 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0115 03:03:06.255810   23809 request.go:629] Waited for 192.333823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:06.255864   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:06.255871   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.255902   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.255920   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.259723   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:06.260470   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:06.260489   23809 pod_ready.go:81] duration metric: took 398.772097ms for pod "kube-controller-manager-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.260497   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.456117   23809 request.go:629] Waited for 195.537538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:03:06.456678   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:03:06.456701   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.456715   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.456728   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.467926   23809 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0115 03:03:06.656087   23809 request.go:629] Waited for 187.357695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:06.656154   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:06.656160   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.656167   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.656176   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.660105   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:06.660948   23809 pod_ready.go:92] pod "kube-proxy-g2kmv" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:06.660970   23809 pod_ready.go:81] duration metric: took 400.466795ms for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.660982   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.856031   23809 request.go:629] Waited for 194.976379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:03:06.856080   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:03:06.856085   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.856093   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.856102   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.859524   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.055617   23809 request.go:629] Waited for 195.176224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:07.055716   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:07.055732   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.055740   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.055749   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.059792   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:07.060769   23809 pod_ready.go:92] pod "kube-proxy-hlbjr" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:07.060788   23809 pod_ready.go:81] duration metric: took 399.798374ms for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.060801   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zfn27" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.255808   23809 request.go:629] Waited for 194.929509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfn27
	I0115 03:03:07.255891   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfn27
	I0115 03:03:07.255902   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.255910   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.255916   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.259541   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.455855   23809 request.go:629] Waited for 195.340547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:07.455928   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:07.455938   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.455946   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.455954   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.459599   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.460101   23809 pod_ready.go:92] pod "kube-proxy-zfn27" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:07.460119   23809 pod_ready.go:81] duration metric: took 399.30478ms for pod "kube-proxy-zfn27" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.460132   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.655680   23809 request.go:629] Waited for 195.498701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:03:07.655748   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:03:07.655761   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.655773   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.655800   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.661613   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:03:07.855559   23809 request.go:629] Waited for 193.344879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:07.855645   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:07.855653   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.855661   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.855667   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.859335   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.860134   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:07.860151   23809 pod_ready.go:81] duration metric: took 400.012975ms for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.860159   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.055960   23809 request.go:629] Waited for 195.744784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:03:08.056037   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:03:08.056042   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.056050   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.056059   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.059487   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:08.256040   23809 request.go:629] Waited for 195.913618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:08.256098   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:08.256116   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.256124   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.256132   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.260329   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:08.260758   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:08.260773   23809 pod_ready.go:81] duration metric: took 400.608211ms for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.260781   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.455881   23809 request.go:629] Waited for 195.041096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m03
	I0115 03:03:08.455959   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m03
	I0115 03:03:08.455967   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.455975   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.455981   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.459719   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:08.655658   23809 request.go:629] Waited for 195.368476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:08.655758   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:08.655768   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.655778   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.655788   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.659538   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:08.660203   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:08.660219   23809 pod_ready.go:81] duration metric: took 399.431163ms for pod "kube-scheduler-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.660228   23809 pod_ready.go:38] duration metric: took 9.999559937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
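	
	The polling visible above — alternating GETs of the pod and its node roughly every 500ms until pod_ready reports Ready — follows a standard client-go pattern. A minimal sketch of that loop, assuming a kubeconfig at the default path (pod name copied from the log; this is not minikube's actual pod_ready implementation):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll every 500ms, as the timestamps above suggest, until Ready or timeout.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-680410-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("ready:", err == nil)
	}
	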
	I0115 03:03:08.660241   23809 api_server.go:52] waiting for apiserver process to appear ...
	I0115 03:03:08.660294   23809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:03:08.676242   23809 api_server.go:72] duration metric: took 17.238083275s to wait for apiserver process to appear ...
	I0115 03:03:08.676264   23809 api_server.go:88] waiting for apiserver healthz status ...
	I0115 03:03:08.676285   23809 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0115 03:03:08.681918   23809 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0115 03:03:08.681988   23809 round_trippers.go:463] GET https://192.168.39.194:8443/version
	I0115 03:03:08.681996   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.682004   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.682010   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.684711   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:03:08.684941   23809 api_server.go:141] control plane version: v1.28.4
	I0115 03:03:08.684959   23809 api_server.go:131] duration metric: took 8.687082ms to wait for apiserver health ...
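	
	The healthz probe above is a plain HTTPS GET against the apiserver that treats a 200 "ok" body as healthy. A minimal sketch of the same check; InsecureSkipVerify is for illustration only, the real client authenticates with the cluster CA:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.194:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
	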
	I0115 03:03:08.684969   23809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 03:03:08.855274   23809 request.go:629] Waited for 170.245399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:08.855352   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:08.855361   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.855369   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.855378   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.862812   23809 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 03:03:08.869268   23809 system_pods.go:59] 24 kube-system pods found
	I0115 03:03:08.869292   23809 system_pods.go:61] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:03:08.869299   23809 system_pods.go:61] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:03:08.869305   23809 system_pods.go:61] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:03:08.869311   23809 system_pods.go:61] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:03:08.869322   23809 system_pods.go:61] "etcd-ha-680410-m03" [6b1380a5-d4d8-419a-a84a-b416e3985c86] Running
	I0115 03:03:08.869329   23809 system_pods.go:61] "kindnet-hw4rx" [78ebda65-da09-4808-86d3-2684faf1de94] Running
	I0115 03:03:08.869335   23809 system_pods.go:61] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:03:08.869342   23809 system_pods.go:61] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:03:08.869352   23809 system_pods.go:61] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:03:08.869359   23809 system_pods.go:61] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:03:08.869366   23809 system_pods.go:61] "kube-apiserver-ha-680410-m03" [8ba403df-4b0c-4fc3-a48e-457fab2a2f3e] Running
	I0115 03:03:08.869377   23809 system_pods.go:61] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:03:08.869386   23809 system_pods.go:61] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:03:08.869396   23809 system_pods.go:61] "kube-controller-manager-ha-680410-m03" [951e61a0-bedd-4a46-8681-a2575b15ae24] Running
	I0115 03:03:08.869406   23809 system_pods.go:61] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:03:08.869413   23809 system_pods.go:61] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:03:08.869423   23809 system_pods.go:61] "kube-proxy-zfn27" [91166a3e-cfbd-4a52-9816-1be24750df7d] Running
	I0115 03:03:08.869429   23809 system_pods.go:61] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:03:08.869439   23809 system_pods.go:61] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:03:08.869446   23809 system_pods.go:61] "kube-scheduler-ha-680410-m03" [cc4bebd0-a36f-4b3c-8783-227bc21a649b] Running
	I0115 03:03:08.869456   23809 system_pods.go:61] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:03:08.869463   23809 system_pods.go:61] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:03:08.869478   23809 system_pods.go:61] "kube-vip-ha-680410-m03" [5c179856-a694-4cfe-a0fa-2aefaae1c9f4] Running
	I0115 03:03:08.869487   23809 system_pods.go:61] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:03:08.869495   23809 system_pods.go:74] duration metric: took 184.518482ms to wait for pod list to return data ...
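	
	The system_pods checks above (and the k8s-apps check that follows) list everything in kube-system and inspect each pod's phase. A minimal sketch of that listing, assuming a kubeconfig at the default path:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("  not running: %s (%s)\n", p.Name, p.Status.Phase)
			}
		}
	}
	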
	I0115 03:03:08.869508   23809 default_sa.go:34] waiting for default service account to be created ...
	I0115 03:03:09.055908   23809 request.go:629] Waited for 186.327345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:03:09.055977   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:03:09.055982   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:09.055990   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:09.055997   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:09.059760   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:09.059872   23809 default_sa.go:45] found service account: "default"
	I0115 03:03:09.059886   23809 default_sa.go:55] duration metric: took 190.370286ms for default service account to be created ...
	I0115 03:03:09.059893   23809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 03:03:09.256244   23809 request.go:629] Waited for 196.26197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:09.256301   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:09.256305   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:09.256312   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:09.256318   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:09.264404   23809 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 03:03:09.271148   23809 system_pods.go:86] 24 kube-system pods found
	I0115 03:03:09.271168   23809 system_pods.go:89] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:03:09.271174   23809 system_pods.go:89] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:03:09.271179   23809 system_pods.go:89] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:03:09.271185   23809 system_pods.go:89] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:03:09.271199   23809 system_pods.go:89] "etcd-ha-680410-m03" [6b1380a5-d4d8-419a-a84a-b416e3985c86] Running
	I0115 03:03:09.271206   23809 system_pods.go:89] "kindnet-hw4rx" [78ebda65-da09-4808-86d3-2684faf1de94] Running
	I0115 03:03:09.271214   23809 system_pods.go:89] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:03:09.271220   23809 system_pods.go:89] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:03:09.271224   23809 system_pods.go:89] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:03:09.271233   23809 system_pods.go:89] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:03:09.271239   23809 system_pods.go:89] "kube-apiserver-ha-680410-m03" [8ba403df-4b0c-4fc3-a48e-457fab2a2f3e] Running
	I0115 03:03:09.271244   23809 system_pods.go:89] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:03:09.271250   23809 system_pods.go:89] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:03:09.271255   23809 system_pods.go:89] "kube-controller-manager-ha-680410-m03" [951e61a0-bedd-4a46-8681-a2575b15ae24] Running
	I0115 03:03:09.271261   23809 system_pods.go:89] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:03:09.271265   23809 system_pods.go:89] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:03:09.271274   23809 system_pods.go:89] "kube-proxy-zfn27" [91166a3e-cfbd-4a52-9816-1be24750df7d] Running
	I0115 03:03:09.271282   23809 system_pods.go:89] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:03:09.271292   23809 system_pods.go:89] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:03:09.271302   23809 system_pods.go:89] "kube-scheduler-ha-680410-m03" [cc4bebd0-a36f-4b3c-8783-227bc21a649b] Running
	I0115 03:03:09.271310   23809 system_pods.go:89] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:03:09.271321   23809 system_pods.go:89] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:03:09.271327   23809 system_pods.go:89] "kube-vip-ha-680410-m03" [5c179856-a694-4cfe-a0fa-2aefaae1c9f4] Running
	I0115 03:03:09.271331   23809 system_pods.go:89] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:03:09.271339   23809 system_pods.go:126] duration metric: took 211.441542ms to wait for k8s-apps to be running ...
	I0115 03:03:09.271348   23809 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 03:03:09.271406   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:03:09.288173   23809 system_svc.go:56] duration metric: took 16.819764ms WaitForService to wait for kubelet
	I0115 03:03:09.288190   23809 kubeadm.go:576] duration metric: took 17.850035064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
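	
	The kubelet service check above shells out to systemctl and relies on the exit status alone (--quiet suppresses output). A minimal local sketch with os/exec, run on the node itself rather than over minikube's SSH runner:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Exit status 0 means the unit is active; anything else is reported as err.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
	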
	I0115 03:03:09.288212   23809 node_conditions.go:102] verifying NodePressure condition ...
	I0115 03:03:09.455575   23809 request.go:629] Waited for 167.302649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes
	I0115 03:03:09.455639   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes
	I0115 03:03:09.455647   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:09.455654   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:09.455662   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:09.459355   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:09.461094   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:03:09.461116   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:03:09.461128   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:03:09.461133   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:03:09.461138   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:03:09.461143   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:03:09.461153   23809 node_conditions.go:105] duration metric: took 172.934804ms to run NodePressure ...
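	
	The NodePressure verification reads each node's conditions and capacity, which is where the ephemeral-storage and cpu figures above come from. A minimal sketch, assuming a kubeconfig at the default path:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status != corev1.ConditionFalse {
						fmt.Printf("  pressure condition %s=%s\n", c.Type, c.Status)
					}
				}
			}
		}
	}
	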
	I0115 03:03:09.461176   23809 start.go:240] waiting for startup goroutines ...
	I0115 03:03:09.461204   23809 start.go:254] writing updated cluster config ...
	I0115 03:03:09.461504   23809 ssh_runner.go:195] Run: rm -f paused
	I0115 03:03:09.512874   23809 start.go:599] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 03:03:09.515029   23809 out.go:177] * Done! kubectl is now configured to use "ha-680410" cluster and "default" namespace by default
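	
	The closing line computes the minor-version skew between the local kubectl (1.29.0) and the cluster (1.28.4); a skew of one minor version is within kubectl's supported range, so it is logged rather than warned about. A minimal sketch of that comparison, using plain string parsing rather than minikube's semver helpers:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minor extracts the minor component of a "major.minor.patch" version string.
	func minor(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	
	func main() {
		a, _ := minor("1.29.0") // kubectl
		b, _ := minor("1.28.4") // cluster
		skew := a - b
		if skew < 0 {
			skew = -skew
		}
		fmt.Println("minor skew:", skew) // 1: within kubectl's supported +/-1 range
	}
	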
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71db211b3de4a       8c811b4aec35f       3 minutes ago       Running             busybox                   0                   b20d3f5bbcbc1       busybox-5bc68d56bd-g7qsd
	ac4741a7561c0       35d002bc4cbfa       4 minutes ago       Running             kube-vip                  1                   872187e8da13d       kube-vip-ha-680410
	68c9f1e1ac647       6e38f40d628db       4 minutes ago       Running             storage-provisioner       1                   80368b80a4e35       storage-provisioner
	e9b8823a0e760       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       0                   80368b80a4e35       storage-provisioner
	aca78f9075890       ead0a4a53df89       6 minutes ago       Running             coredns                   0                   18f2389d4dfa8       coredns-5dd5756b68-mqq9g
	e087fead2886d       ead0a4a53df89       6 minutes ago       Running             coredns                   0                   967aba65a98b9       coredns-5dd5756b68-krvzt
	ab177d4efea33       c7d1297425461       6 minutes ago       Running             kindnet-cni               0                   5fd8936ddd822       kindnet-jjnbw
	8395447eb2586       83f6cc407eed8       6 minutes ago       Running             kube-proxy                0                   85829bba706f2       kube-proxy-g2kmv
	330d559e17674       35d002bc4cbfa       7 minutes ago       Exited              kube-vip                  0                   872187e8da13d       kube-vip-ha-680410
	ec84efc819d75       73deb9a3f7025       7 minutes ago       Running             etcd                      0                   a9be7b584e2de       etcd-ha-680410
	7fbbef1932aec       e3db313c6dbc0       7 minutes ago       Running             kube-scheduler            0                   9bd06104e2643       kube-scheduler-ha-680410
	877ad092a6d4e       d058aa5ab969c       7 minutes ago       Running             kube-controller-manager   0                   72c0a3ac07595       kube-controller-manager-ha-680410
	7f2ebde9b0057       7fe0e6f37db33       7 minutes ago       Running             kube-apiserver            0                   72653b74ee7a0       kube-apiserver-ha-680410
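	
	The table above is CRI-level container state. A minimal sketch that produces an equivalent listing by calling the CRI RuntimeService directly over containerd's socket (socket path assumed from the cri-socket annotation in the describe output below; this is a sketch, not the code that generated the table):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncated IDs match the CONTAINER column format above.
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}
	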
	
	
	==> containerd <==
	-- Journal begins at Mon 2024-01-15 02:58:39 UTC, ends at Mon 2024-01-15 03:06:15 UTC. --
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.295506437Z" level=info msg="shim disconnected" id=330d559e17674f4b3936e8e4b4c4469ff009671f86c76a5316ca710827bb365f namespace=k8s.io
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.295732325Z" level=warning msg="cleaning up after shim disconnected" id=330d559e17674f4b3936e8e4b4c4469ff009671f86c76a5316ca710827bb365f namespace=k8s.io
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.295847595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.899387182Z" level=info msg="CreateContainer within sandbox \"872187e8da13da13a80d6169d9f95d068b0a171528a1045606abba0d399932d8\" for container &ContainerMetadata{Name:kube-vip,Attempt:1,}"
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.931237433Z" level=info msg="CreateContainer within sandbox \"872187e8da13da13a80d6169d9f95d068b0a171528a1045606abba0d399932d8\" for &ContainerMetadata{Name:kube-vip,Attempt:1,} returns container id \"ac4741a7561c0e0caddf21a3f098abe3a3b362b568584a136639d569f915ab20\""
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.932117073Z" level=info msg="StartContainer for \"ac4741a7561c0e0caddf21a3f098abe3a3b362b568584a136639d569f915ab20\""
	Jan 15 03:01:37 ha-680410 containerd[688]: time="2024-01-15T03:01:37.431904627Z" level=info msg="StartContainer for \"ac4741a7561c0e0caddf21a3f098abe3a3b362b568584a136639d569f915ab20\" returns successfully"
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.024038211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-5bc68d56bd-g7qsd,Uid:b2908bcd-6b86-4135-b114-1476eafa9743,Namespace:default,Attempt:0,}"
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.128478590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.129055368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.129272659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.129454869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.616476161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-5bc68d56bd-g7qsd,Uid:b2908bcd-6b86-4135-b114-1476eafa9743,Namespace:default,Attempt:0,} returns sandbox id \"b20d3f5bbcbc168c07a056d2b87ec9c958957640ba3d28e79ee3bfc2416f89af\""
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.620890919Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.001759655Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.003321035Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=725937"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.005514468Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.008397799Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.010687806Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.011163755Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 3.390075801s"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.011235838Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.020442637Z" level=info msg="CreateContainer within sandbox \"b20d3f5bbcbc168c07a056d2b87ec9c958957640ba3d28e79ee3bfc2416f89af\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.052560257Z" level=info msg="CreateContainer within sandbox \"b20d3f5bbcbc168c07a056d2b87ec9c958957640ba3d28e79ee3bfc2416f89af\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"71db211b3de4a67dd5f5c66bf81d090cfa90907459d320e1ff52dac5a72999ef\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.054438668Z" level=info msg="StartContainer for \"71db211b3de4a67dd5f5c66bf81d090cfa90907459d320e1ff52dac5a72999ef\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.145987416Z" level=info msg="StartContainer for \"71db211b3de4a67dd5f5c66bf81d090cfa90907459d320e1ff52dac5a72999ef\" returns successfully"
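	
	The journal above shows containerd's CRI plugin pulling gcr.io/k8s-minikube/busybox:1.28 into the k8s.io namespace and unpacking it before the busybox container starts. A minimal sketch of the same pull using containerd's Go client (kubelet actually drives this through the CRI, not through this client):
	
	package main
	
	import (
		"context"
		"fmt"
	
		containerd "github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)
	
	func main() {
		// Connect to the same containerd instance the journal above came from.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		// Kubernetes-managed images live in the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
		img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28", containerd.WithPullUnpack)
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", img.Name(), img.Target().Digest)
	}
	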
	
	
	==> coredns [aca78f90758903e3af45c02b6f76ed28b8f2b6ff5dbe5c843fbdce3a6bbf141b] <==
	[INFO] 10.244.1.2:42768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129162s
	[INFO] 10.244.1.2:51972 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001864917s
	[INFO] 10.244.0.4:60075 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277096s
	[INFO] 10.244.0.4:45278 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003237777s
	[INFO] 10.244.0.4:55502 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136484s
	[INFO] 10.244.2.3:46123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124123s
	[INFO] 10.244.2.3:54917 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00044995s
	[INFO] 10.244.2.3:44201 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108123s
	[INFO] 10.244.2.3:43689 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398446s
	[INFO] 10.244.2.3:32811 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092164s
	[INFO] 10.244.1.2:60995 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093737s
	[INFO] 10.244.1.2:41143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00114436s
	[INFO] 10.244.1.2:38837 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189027s
	[INFO] 10.244.0.4:35960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098321s
	[INFO] 10.244.0.4:40580 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131103s
	[INFO] 10.244.2.3:32957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177447s
	[INFO] 10.244.2.3:38476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126417s
	[INFO] 10.244.2.3:40136 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112359s
	[INFO] 10.244.2.3:48767 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081795s
	[INFO] 10.244.1.2:43184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139657s
	[INFO] 10.244.1.2:34920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00019567s
	[INFO] 10.244.0.4:34771 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012137s
	[INFO] 10.244.2.3:59729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144205s
	[INFO] 10.244.2.3:36371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000391863s
	[INFO] 10.244.1.2:38862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151045s
	
	
	==> coredns [e087fead2886d903d40cde823e6b0c43bf07fc180213b27e12a6aa979d3c7013] <==
	[INFO] 10.244.0.4:41716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000645671s
	[INFO] 10.244.0.4:40319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006606911s
	[INFO] 10.244.0.4:56428 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199319s
	[INFO] 10.244.0.4:49993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137413s
	[INFO] 10.244.0.4:45822 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196298s
	[INFO] 10.244.2.3:49969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001807508s
	[INFO] 10.244.2.3:59754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103654s
	[INFO] 10.244.2.3:46269 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153092s
	[INFO] 10.244.1.2:37428 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189523s
	[INFO] 10.244.1.2:52125 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001784134s
	[INFO] 10.244.1.2:45242 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077649s
	[INFO] 10.244.1.2:51192 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076033s
	[INFO] 10.244.1.2:51654 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148666s
	[INFO] 10.244.0.4:57400 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191186s
	[INFO] 10.244.0.4:58502 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035427s
	[INFO] 10.244.1.2:60504 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145899s
	[INFO] 10.244.1.2:58875 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080658s
	[INFO] 10.244.0.4:35193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178782s
	[INFO] 10.244.0.4:36630 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175501s
	[INFO] 10.244.0.4:44483 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215314s
	[INFO] 10.244.2.3:48596 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116005s
	[INFO] 10.244.2.3:50671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000334676s
	[INFO] 10.244.1.2:40815 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175593s
	[INFO] 10.244.1.2:43884 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179218s
	[INFO] 10.244.1.2:43516 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000323765s
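	
	The NXDOMAIN answers above are search-path expansion at work: a pod resolving "kubernetes.default" first tries names like "kubernetes.default.default.svc.cluster.local" before the fully qualified name succeeds. A minimal in-pod sketch that resolves the service FQDN directly (the 10.96.0.1 expectation is an assumption based on the default service CIDR):
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
	)
	
	func main() {
		addrs, err := net.DefaultResolver.LookupHost(context.Background(),
			"kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("kubernetes service IPs:", addrs) // typically 10.96.0.1
	}
	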
	
	
	==> describe nodes <==
	Name:               ha-680410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T02_59_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 02:59:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:06:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    ha-680410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fccdf8018b24813b18cf29e87dcf19a
	  System UUID:                4fccdf80-18b2-4813-b18c-f29e87dcf19a
	  Boot ID:                    663c1288-e3cb-4dbf-b88e-8ae64994e27f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-g7qsd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 coredns-5dd5756b68-krvzt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m50s
	  kube-system                 coredns-5dd5756b68-mqq9g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m50s
	  kube-system                 etcd-ha-680410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m1s
	  kube-system                 kindnet-jjnbw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m50s
	  kube-system                 kube-apiserver-ha-680410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-ha-680410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-proxy-g2kmv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-scheduler-ha-680410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-vip-ha-680410                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m49s  kube-proxy       
	  Normal  Starting                 7m     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m     kubelet          Node ha-680410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m     kubelet          Node ha-680410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m     kubelet          Node ha-680410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m51s  node-controller  Node ha-680410 event: Registered Node ha-680410 in Controller
	  Normal  NodeReady                6m45s  kubelet          Node ha-680410 status is now: NodeReady
	  Normal  RegisteredNode           4m21s  node-controller  Node ha-680410 event: Registered Node ha-680410 in Controller
	  Normal  RegisteredNode           3m11s  node-controller  Node ha-680410 event: Registered Node ha-680410 in Controller
	
	
	Name:               ha-680410-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T03_01_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 03:01:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:04:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-680410-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d97e54f9884977b04fa0f9a8f6f4bf
	  System UUID:                74d97e54-f988-4977-b04f-a0f9a8f6f4bf
	  Boot ID:                    15c976f5-aa7d-4b1e-bcc6-fad76ebdfe1a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xq99z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 etcd-ha-680410-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m50s
	  kube-system                 kindnet-qcjzf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m51s
	  kube-system                 kube-apiserver-ha-680410-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-ha-680410-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-hlbjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-ha-680410-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-vip-ha-680410-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m33s  kube-proxy       
	  Normal  RegisteredNode  4m21s  node-controller  Node ha-680410-m02 event: Registered Node ha-680410-m02 in Controller
	  Normal  RegisteredNode  3m11s  node-controller  Node ha-680410-m02 event: Registered Node ha-680410-m02 in Controller
	  Normal  NodeNotReady    36s    node-controller  Node ha-680410-m02 status is now: NodeNotReady
	
	
	Name:               ha-680410-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T03_02_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 03:02:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:06:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-680410-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5e9167a97d54e5784fc7b8cdfd0c427
	  System UUID:                a5e9167a-97d5-4e57-84fc-7b8cdfd0c427
	  Boot ID:                    cc914440-a625-4494-998c-44556ff1dd60
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h2zgj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 etcd-ha-680410-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m29s
	  kube-system                 kindnet-hw4rx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m29s
	  kube-system                 kube-apiserver-ha-680410-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-controller-manager-ha-680410-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-zfn27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-scheduler-ha-680410-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kube-vip-ha-680410-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m26s  kube-proxy       
	  Normal  RegisteredNode  3m26s  node-controller  Node ha-680410-m03 event: Registered Node ha-680410-m03 in Controller
	  Normal  RegisteredNode  3m26s  node-controller  Node ha-680410-m03 event: Registered Node ha-680410-m03 in Controller
	  Normal  RegisteredNode  3m11s  node-controller  Node ha-680410-m03 event: Registered Node ha-680410-m03 in Controller
	
	
	Name:               ha-680410-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T03_04_24_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 03:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:06:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-680410-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a96c93c5f4c248fdb2fe3b8a30beaa9c
	  System UUID:                a96c93c5-f4c2-48fd-b2fe-3b8a30beaa9c
	  Boot ID:                    9afbc99b-309c-467a-8dbb-872148a7c4be
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-f7bpb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      113s
	  kube-system                 kube-proxy-5kthb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x5 over 114s)  kubelet          Node ha-680410-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 114s)  kubelet          Node ha-680410-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x5 over 114s)  kubelet          Node ha-680410-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           112s                 node-controller  Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller
	  Normal  RegisteredNode           112s                 node-controller  Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller
	  Normal  RegisteredNode           112s                 node-controller  Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller
	  Normal  NodeReady                103s                 kubelet          Node ha-680410-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan15 02:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067614] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.334408] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.238048] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144249] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.062200] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.651400] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +0.109143] systemd-fstab-generator[568]: Ignoring "noauto" for root device
	[  +0.140488] systemd-fstab-generator[582]: Ignoring "noauto" for root device
	[  +0.109723] systemd-fstab-generator[593]: Ignoring "noauto" for root device
	[  +0.226340] systemd-fstab-generator[620]: Ignoring "noauto" for root device
	[  +5.696852] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.696529] systemd-fstab-generator[736]: Ignoring "noauto" for root device
	[Jan15 02:59] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[ +10.813267] systemd-fstab-generator[1362]: Ignoring "noauto" for root device
	[ +17.501730] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [ec84efc819d758b087500345117940598a649b6a4f78324c12cdebe9fc4e3902] <==
	{"level":"warn","ts":"2024-01-15T03:06:15.849429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.857466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.871579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.884246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.888358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.902921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.903767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.905205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.91246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.916745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.928842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.939548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.946905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.951073Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.955329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.971437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.979201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.987323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.987425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.991478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:15.996708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:16.002458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:16.014164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:16.02266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:06:16.088032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:06:16 up 7 min,  0 users,  load average: 0.18, 0.38, 0.21
	Linux ha-680410 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [ab177d4efea33a14988c75111214a7dafe82cbfa3018bad64936313aa34a7b1c] <==
	I0115 03:05:40.575227       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	I0115 03:05:50.584060       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0115 03:05:50.584109       1 main.go:227] handling current node
	I0115 03:05:50.584128       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0115 03:05:50.584135       1 main.go:250] Node ha-680410-m02 has CIDR [10.244.1.0/24] 
	I0115 03:05:50.584494       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0115 03:05:50.584531       1 main.go:250] Node ha-680410-m03 has CIDR [10.244.2.0/24] 
	I0115 03:05:50.584777       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0115 03:05:50.584861       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	I0115 03:06:00.591302       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0115 03:06:00.591356       1 main.go:227] handling current node
	I0115 03:06:00.591368       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0115 03:06:00.591373       1 main.go:250] Node ha-680410-m02 has CIDR [10.244.1.0/24] 
	I0115 03:06:00.591763       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0115 03:06:00.591841       1 main.go:250] Node ha-680410-m03 has CIDR [10.244.2.0/24] 
	I0115 03:06:00.591923       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0115 03:06:00.592112       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	I0115 03:06:10.604890       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0115 03:06:10.605004       1 main.go:227] handling current node
	I0115 03:06:10.605029       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0115 03:06:10.605036       1 main.go:250] Node ha-680410-m02 has CIDR [10.244.1.0/24] 
	I0115 03:06:10.605340       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0115 03:06:10.605383       1 main.go:250] Node ha-680410-m03 has CIDR [10.244.2.0/24] 
	I0115 03:06:10.605448       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0115 03:06:10.605453       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a] <==
	Trace[172462707]: [4.956611513s] [4.956611513s] END
	I0115 03:01:40.103276       1 trace.go:236] Trace[1664719019]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6aebf58f-b5a2-4122-9685-43b6042b1762,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-3epklqu42frowwycwc5b3xum5u,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (15-Jan-2024 03:01:39.602) (total time: 501ms):
	Trace[1664719019]: ["GuaranteedUpdate etcd3" audit-id:6aebf58f-b5a2-4122-9685-43b6042b1762,key:/leases/kube-system/apiserver-3epklqu42frowwycwc5b3xum5u,type:*coordination.Lease,resource:leases.coordination.k8s.io 500ms (03:01:39.602)
	Trace[1664719019]:  ---"Txn call completed" 499ms (03:01:40.103)]
	Trace[1664719019]: [501.090932ms] [501.090932ms] END
	I0115 03:01:40.132646       1 trace.go:236] Trace[1561202185]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3a904893-3107-405e-904c-6f3f5a201318,client:192.168.39.178,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Jan-2024 03:01:33.226) (total time: 6906ms):
	Trace[1561202185]: ["Create etcd3" audit-id:3a904893-3107-405e-904c-6f3f5a201318,key:/pods/kube-system/kube-controller-manager-ha-680410-m02,type:*core.Pod,resource:pods 6895ms (03:01:33.236)
	Trace[1561202185]:  ---"Txn call succeeded" 6856ms (03:01:40.093)]
	Trace[1561202185]: ---"Write to database call failed" len:2375,err:pods "kube-controller-manager-ha-680410-m02" already exists 38ms (03:01:40.132)
	Trace[1561202185]: [6.906197571s] [6.906197571s] END
	I0115 03:01:40.133353       1 trace.go:236] Trace[843603697]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:eb61f224-0bff-4d13-a822-7c6c5684cc21,client:192.168.39.178,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Jan-2024 03:01:33.236) (total time: 6896ms):
	Trace[843603697]: ["Create etcd3" audit-id:eb61f224-0bff-4d13-a822-7c6c5684cc21,key:/pods/kube-system/kube-scheduler-ha-680410-m02,type:*core.Pod,resource:pods 6895ms (03:01:33.237)
	Trace[843603697]:  ---"Txn call succeeded" 6855ms (03:01:40.093)]
	Trace[843603697]: ---"Write to database call failed" len:1220,err:pods "kube-scheduler-ha-680410-m02" already exists 40ms (03:01:40.133)
	Trace[843603697]: [6.896605563s] [6.896605563s] END
	I0115 03:01:40.139727       1 trace.go:236] Trace[1201279704]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:dd127ad8-253d-45ee-a66e-98e9eeb15c77,client:192.168.39.178,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Jan-2024 03:01:33.234) (total time: 6905ms):
	Trace[1201279704]: ["Create etcd3" audit-id:dd127ad8-253d-45ee-a66e-98e9eeb15c77,key:/pods/kube-system/etcd-ha-680410-m02,type:*core.Pod,resource:pods 6901ms (03:01:33.237)
	Trace[1201279704]:  ---"Txn call succeeded" 6858ms (03:01:40.096)]
	Trace[1201279704]: ---"Write to database call failed" len:2214,err:pods "etcd-ha-680410-m02" already exists 42ms (03:01:40.139)
	Trace[1201279704]: [6.905430194s] [6.905430194s] END
	E0115 03:03:47.968507       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.194:47330->192.168.39.194:10250: write: connection reset by peer
	E0115 03:03:48.832093       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.194:52604->192.168.39.182:10250: write: broken pipe
	E0115 03:03:49.454503       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.194:52620->192.168.39.182:10250: write: broken pipe
	E0115 03:03:50.834553       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.39.254:8443->192.168.39.1:54982: write: connection reset by peer
	W0115 03:05:11.296259       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.194]
	
	
	==> kube-controller-manager [877ad092a6d4e30ab9be6b910e6a316a240074794036a32f8a12817c49097d08] <==
	I0115 03:03:47.356250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.065µs"
	E0115 03:04:22.126407       1 certificate_controller.go:146] Sync csr-jfxlq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jfxlq": the object has been modified; please apply your changes to the latest version and try again
	I0115 03:04:23.646608       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-680410-m04\" does not exist"
	I0115 03:04:23.693989       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-95xln"
	I0115 03:04:23.694054       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f7bpb"
	I0115 03:04:23.705433       1 range_allocator.go:380] "Set node PodCIDR" node="ha-680410-m04" podCIDRs=["10.244.3.0/24"]
	I0115 03:04:23.856529       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-qb2ms"
	I0115 03:04:23.912707       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-95xln"
	I0115 03:04:23.930873       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-w6km4"
	I0115 03:04:23.972833       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-5shm7"
	I0115 03:04:24.690247       1 event.go:307] "Event occurred" object="ha-680410-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller"
	I0115 03:04:24.711360       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-680410-m04"
	I0115 03:04:33.771036       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-680410-m04"
	I0115 03:05:39.740267       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-680410-m04"
	I0115 03:05:39.740525       1 event.go:307] "Event occurred" object="ha-680410-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-680410-m02 status is now: NodeNotReady"
	I0115 03:05:39.760591       1 event.go:307] "Event occurred" object="kube-system/kindnet-qcjzf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.783095       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.799424       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.816376       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.831430       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.848497       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.860879       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-xq99z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.882132       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-hlbjr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.925248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.582653ms"
	I0115 03:05:39.925357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.447µs"
	
	
	==> kube-proxy [8395447eb258631ee0befb4adcb4a9be346a5a3f24c2351fc199a299caec200e] <==
	I0115 02:59:26.147708       1 server_others.go:69] "Using iptables proxy"
	I0115 02:59:26.162190       1 node.go:141] Successfully retrieved node IP: 192.168.39.194
	I0115 02:59:26.217320       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 02:59:26.217341       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 02:59:26.223236       1 server_others.go:152] "Using iptables Proxier"
	I0115 02:59:26.223545       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 02:59:26.224394       1 server.go:846] "Version info" version="v1.28.4"
	I0115 02:59:26.224584       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 02:59:26.225617       1 config.go:188] "Starting service config controller"
	I0115 02:59:26.225876       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 02:59:26.226022       1 config.go:97] "Starting endpoint slice config controller"
	I0115 02:59:26.226072       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 02:59:26.228817       1 config.go:315] "Starting node config controller"
	I0115 02:59:26.229012       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 02:59:26.326588       1 shared_informer.go:318] Caches are synced for service config
	I0115 02:59:26.326762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 02:59:26.337116       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7fbbef1932aecead8bb6799eeb70b29c85e5d27cc193309ee6ae4e88777cc0b5] <==
	I0115 03:03:10.542617       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5bc68d56bd-xhzg2" node="ha-680410-m03"
	E0115 03:03:10.578262       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-xq99z\": pod busybox-5bc68d56bd-xq99z is already assigned to node \"ha-680410-m02\"" plugin="DefaultBinder" pod="default/busybox-5bc68d56bd-xq99z" node="ha-680410-m02"
	E0115 03:03:10.578426       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 06f2f28a-9208-4fa7-aff9-5fa40942c0b7(default/busybox-5bc68d56bd-xq99z) wasn't assumed so cannot be forgotten"
	E0115 03:03:10.580286       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-xq99z\": pod busybox-5bc68d56bd-xq99z is already assigned to node \"ha-680410-m02\"" pod="default/busybox-5bc68d56bd-xq99z"
	I0115 03:03:10.580482       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5bc68d56bd-xq99z" node="ha-680410-m02"
	E0115 03:03:45.481359       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-h2zgj\": pod busybox-5bc68d56bd-h2zgj is already assigned to node \"ha-680410-m03\"" plugin="DefaultBinder" pod="default/busybox-5bc68d56bd-h2zgj" node="ha-680410-m03"
	E0115 03:03:45.481548       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 722c108e-ed9d-4116-89a6-872c6c470ad1(default/busybox-5bc68d56bd-h2zgj) wasn't assumed so cannot be forgotten"
	E0115 03:03:45.481625       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-h2zgj\": pod busybox-5bc68d56bd-h2zgj is already assigned to node \"ha-680410-m03\"" pod="default/busybox-5bc68d56bd-h2zgj"
	I0115 03:03:45.481683       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5bc68d56bd-h2zgj" node="ha-680410-m03"
	E0115 03:04:23.731876       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-95xln\": pod kube-proxy-95xln is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-95xln" node="ha-680410-m04"
	E0115 03:04:23.734474       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ab95e21a-71ee-4861-aad4-9bfd0021caea(kube-system/kube-proxy-95xln) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.735022       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-95xln\": pod kube-proxy-95xln is already assigned to node \"ha-680410-m04\"" pod="kube-system/kube-proxy-95xln"
	I0115 03:04:23.735510       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-95xln" node="ha-680410-m04"
	E0115 03:04:23.825216       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qb2ms\": pod kindnet-qb2ms is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qb2ms" node="ha-680410-m04"
	E0115 03:04:23.826232       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7482e17d-b233-41e2-8475-a4cb43663e1c(kube-system/kindnet-qb2ms) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.825914       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5shm7\": pod kube-proxy-5shm7 is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5shm7" node="ha-680410-m04"
	E0115 03:04:23.827817       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e80768c7-121a-4da8-9428-b9ee6922e2be(kube-system/kube-proxy-5shm7) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.827777       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qb2ms\": pod kindnet-qb2ms is already assigned to node \"ha-680410-m04\"" pod="kube-system/kindnet-qb2ms"
	I0115 03:04:23.830313       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qb2ms" node="ha-680410-m04"
	E0115 03:04:23.831900       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5shm7\": pod kube-proxy-5shm7 is already assigned to node \"ha-680410-m04\"" pod="kube-system/kube-proxy-5shm7"
	I0115 03:04:23.836812       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5shm7" node="ha-680410-m04"
	E0115 03:04:23.876123       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5kthb\": pod kube-proxy-5kthb is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5kthb" node="ha-680410-m04"
	E0115 03:04:23.876829       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e3d21333-880a-4624-8ae4-cc3ee7b558b9(kube-system/kube-proxy-5kthb) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.876920       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5kthb\": pod kube-proxy-5kthb is already assigned to node \"ha-680410-m04\"" pod="kube-system/kube-proxy-5kthb"
	I0115 03:04:23.878223       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5kthb" node="ha-680410-m04"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 02:58:39 UTC, ends at Mon 2024-01-15 03:06:16 UTC. --
	Jan 15 03:03:10 ha-680410 kubelet[1369]: I0115 03:03:10.576607    1369 topology_manager.go:215] "Topology Admit Handler" podUID="bc9bf224-d003-41e8-9cda-ba1d8ae491e3" podNamespace="default" podName="busybox-5bc68d56bd-k7qrp"
	Jan 15 03:03:10 ha-680410 kubelet[1369]: E0115 03:03:10.641579    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-w42zl], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-5bc68d56bd-k7qrp" podUID="bc9bf224-d003-41e8-9cda-ba1d8ae491e3"
	Jan 15 03:03:10 ha-680410 kubelet[1369]: I0115 03:03:10.667358    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w42zl\" (UniqueName: \"kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl\") pod \"busybox-5bc68d56bd-k7qrp\" (UID: \"bc9bf224-d003-41e8-9cda-ba1d8ae491e3\") " pod="default/busybox-5bc68d56bd-k7qrp"
	Jan 15 03:03:10 ha-680410 kubelet[1369]: I0115 03:03:10.713570    1369 topology_manager.go:215] "Topology Admit Handler" podUID="b2908bcd-6b86-4135-b114-1476eafa9743" podNamespace="default" podName="busybox-5bc68d56bd-g7qsd"
	Jan 15 03:03:10 ha-680410 kubelet[1369]: I0115 03:03:10.769528    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njn9g\" (UniqueName: \"kubernetes.io/projected/b2908bcd-6b86-4135-b114-1476eafa9743-kube-api-access-njn9g\") pod \"busybox-5bc68d56bd-g7qsd\" (UID: \"b2908bcd-6b86-4135-b114-1476eafa9743\") " pod="default/busybox-5bc68d56bd-g7qsd"
	Jan 15 03:03:11 ha-680410 kubelet[1369]: I0115 03:03:11.280115    1369 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w42zl\" (UniqueName: \"kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl\") pod \"bc9bf224-d003-41e8-9cda-ba1d8ae491e3\" (UID: \"bc9bf224-d003-41e8-9cda-ba1d8ae491e3\") "
	Jan 15 03:03:11 ha-680410 kubelet[1369]: I0115 03:03:11.293133    1369 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl" (OuterVolumeSpecName: "kube-api-access-w42zl") pod "bc9bf224-d003-41e8-9cda-ba1d8ae491e3" (UID: "bc9bf224-d003-41e8-9cda-ba1d8ae491e3"). InnerVolumeSpecName "kube-api-access-w42zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 03:03:11 ha-680410 kubelet[1369]: I0115 03:03:11.381378    1369 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w42zl\" (UniqueName: \"kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl\") on node \"ha-680410\" DevicePath \"\""
	Jan 15 03:03:13 ha-680410 kubelet[1369]: I0115 03:03:13.465208    1369 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bc9bf224-d003-41e8-9cda-ba1d8ae491e3" path="/var/lib/kubelet/pods/bc9bf224-d003-41e8-9cda-ba1d8ae491e3/volumes"
	Jan 15 03:03:15 ha-680410 kubelet[1369]: E0115 03:03:15.514412    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:03:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:03:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:03:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:04:15 ha-680410 kubelet[1369]: E0115 03:04:15.510800    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:04:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:04:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:04:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:05:15 ha-680410 kubelet[1369]: E0115 03:05:15.514070    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:05:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:05:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:05:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:06:15 ha-680410 kubelet[1369]: E0115 03:06:15.515060    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:06:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:06:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:06:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-680410 -n ha-680410
helpers_test.go:261: (dbg) Run:  kubectl --context ha-680410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestHA/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestHA/serial/StopSecondaryNode (81.78s)
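The repeated "Could not set up iptables canary" errors in the kubelet journal above are kubelet's periodic iptables probe: it creates a KUBE-KUBELET-CANARY chain in each table it watches so it can detect external rule flushes, and here the ip6tables nat table is unavailable because the guest kernel has not loaded the ip6table_nat module (hence the "do you need to insmod?" hint). A minimal check from the host, assuming the minikube guest kernel ships the module at all; if the ISO kernel omits it, these entries are recurring benign noise rather than the cause of the failure:

	$ out/minikube-linux-amd64 -p ha-680410 ssh -- "lsmod | grep ip6table_nat || echo module not loaded"
	$ out/minikube-linux-amd64 -p ha-680410 ssh -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"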

TestHA/serial/RestartSecondaryNode (56.92s)

=== RUN   TestHA/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 3 (3.19061091s)

-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 03:06:20.634087   28259 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:20.634373   28259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:20.634383   28259 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:20.634388   28259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:20.634626   28259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:20.634863   28259 out.go:303] Setting JSON to false
	I0115 03:06:20.634909   28259 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:20.635003   28259 notify.go:220] Checking for updates...
	I0115 03:06:20.635493   28259 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:20.635514   28259 status.go:255] checking status of ha-680410 ...
	I0115 03:06:20.636291   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:20.636329   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:20.655657   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0115 03:06:20.656082   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:20.656644   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:20.656684   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:20.657084   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:20.657279   28259 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:20.659033   28259 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:20.659045   28259 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:20.659317   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:20.659359   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:20.673028   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0115 03:06:20.673360   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:20.673794   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:20.673812   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:20.674095   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:20.674263   28259 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:20.677055   28259 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:20.677492   28259 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:20.677526   28259 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:20.677659   28259 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:20.678000   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:20.678032   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:20.691763   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0115 03:06:20.692139   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:20.692612   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:20.692642   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:20.692955   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:20.693098   28259 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:20.693295   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:20.693332   28259 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:20.696204   28259 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:20.696685   28259 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:20.696724   28259 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:20.696839   28259 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:20.697012   28259 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:20.697192   28259 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:20.697356   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:20.786710   28259 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:20.792382   28259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:20.806984   28259 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:20.807007   28259 api_server.go:166] Checking apiserver status ...
	I0115 03:06:20.807040   28259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:20.819688   28259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:20.829112   28259 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:20.829172   28259 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:20.839236   28259 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:20.839257   28259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:20.846280   28259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:20.846301   28259 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:20.846311   28259 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:20.846336   28259 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:20.846735   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:20.846779   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:20.860964   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0115 03:06:20.861320   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:20.861713   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:20.861735   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:20.862054   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:20.862247   28259 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:20.863635   28259 status.go:330] ha-680410-m02 host status = "Running" (err=<nil>)
	I0115 03:06:20.863651   28259 host.go:66] Checking if "ha-680410-m02" exists ...
	I0115 03:06:20.863919   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:20.863961   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:20.877353   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0115 03:06:20.877722   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:20.878091   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:20.878117   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:20.878412   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:20.878578   28259 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 03:06:20.881080   28259 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:20.881587   28259 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 03:06:20.881612   28259 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:20.881771   28259 host.go:66] Checking if "ha-680410-m02" exists ...
	I0115 03:06:20.882045   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:20.882085   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:20.896678   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0115 03:06:20.896995   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:20.897399   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:20.897415   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:20.897707   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:20.897839   28259 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 03:06:20.898005   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:20.898024   28259 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 03:06:20.900657   28259 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:20.901052   28259 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 03:06:20.901088   28259 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:20.901248   28259 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 03:06:20.901416   28259 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 03:06:20.901577   28259 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 03:06:20.901707   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	W0115 03:06:23.395688   28259 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.178:22: connect: no route to host
	W0115 03:06:23.395810   28259 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	E0115 03:06:23.395839   28259 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:23.395853   28259 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 03:06:23.395872   28259 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:23.395879   28259 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:23.396208   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:23.396257   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:23.410441   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0115 03:06:23.410844   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:23.411378   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:23.411417   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:23.411703   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:23.411921   28259 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:23.413308   28259 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:23.413325   28259 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:23.413638   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:23.413695   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:23.427104   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0115 03:06:23.427460   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:23.427879   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:23.427903   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:23.428187   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:23.428351   28259 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:23.430704   28259 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:23.431070   28259 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:23.431094   28259 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:23.431195   28259 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:23.431594   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:23.431632   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:23.446052   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0115 03:06:23.446471   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:23.446997   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:23.447023   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:23.447325   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:23.447550   28259 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:23.447751   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:23.447773   28259 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:23.450955   28259 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:23.451424   28259 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:23.451449   28259 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:23.451607   28259 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:23.451794   28259 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:23.451954   28259 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:23.452189   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:23.554856   28259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:23.569341   28259 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:23.569371   28259 api_server.go:166] Checking apiserver status ...
	I0115 03:06:23.569414   28259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:23.582328   28259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:23.591610   28259 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:23.591668   28259 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:23.602246   28259 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:23.602262   28259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:23.607182   28259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:23.607199   28259 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:23.607206   28259 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:23.607218   28259 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:23.607572   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:23.607622   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:23.621791   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0115 03:06:23.622137   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:23.622574   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:23.622594   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:23.622913   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:23.623092   28259 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:23.624650   28259 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:23.624663   28259 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:23.624968   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:23.625009   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:23.638806   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0115 03:06:23.639183   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:23.639578   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:23.639600   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:23.639920   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:23.640080   28259 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:23.642861   28259 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:23.643354   28259 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:23.643403   28259 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:23.643638   28259 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:23.643933   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:23.643981   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:23.657661   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0115 03:06:23.658018   28259 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:23.658415   28259 main.go:141] libmachine: Using API Version  1
	I0115 03:06:23.658438   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:23.658736   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:23.658934   28259 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:23.659089   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:23.659114   28259 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:23.661687   28259 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:23.662202   28259 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:23.662255   28259 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:23.662368   28259 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:23.662528   28259 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:23.662700   28259 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:23.662847   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:23.751080   28259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:23.766535   28259 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
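The m02 "host: Error" above is a timing artifact: the status poll ran a few seconds after "node start m02", while the VM's network stack was still coming up, so every SSH dial to 192.168.39.178:22 failed with "no route to host" and the storage and kubelet checks were skipped. The test immediately retries below. A minimal way to avoid polling too early, outside the harness and assuming a netcat build that supports -z/-w, is to wait for the node's SSH port before asking for status:

	$ until nc -z -w 2 192.168.39.178 22; do sleep 5; done
	$ out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr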
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 3 (4.92632691s)

-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0115 03:06:25.024307   28344 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:25.024412   28344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:25.024416   28344 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:25.024420   28344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:25.024580   28344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:25.024733   28344 out.go:303] Setting JSON to false
	I0115 03:06:25.024762   28344 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:25.024826   28344 notify.go:220] Checking for updates...
	I0115 03:06:25.025198   28344 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:25.025214   28344 status.go:255] checking status of ha-680410 ...
	I0115 03:06:25.025640   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:25.025729   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:25.043522   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42193
	I0115 03:06:25.043890   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:25.044375   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:25.044399   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:25.044691   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:25.044807   28344 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:25.046446   28344 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:25.046467   28344 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:25.046745   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:25.046775   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:25.062178   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I0115 03:06:25.062494   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:25.062921   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:25.062946   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:25.063234   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:25.063446   28344 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:25.065931   28344 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:25.066380   28344 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:25.066410   28344 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:25.066534   28344 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:25.066885   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:25.066920   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:25.080721   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0115 03:06:25.081098   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:25.081493   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:25.081509   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:25.081776   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:25.081930   28344 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:25.082200   28344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:25.082224   28344 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:25.084563   28344 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:25.085000   28344 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:25.085046   28344 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:25.085173   28344 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:25.085360   28344 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:25.085523   28344 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:25.085667   28344 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:25.175131   28344 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:25.180919   28344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:25.195190   28344 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:25.195218   28344 api_server.go:166] Checking apiserver status ...
	I0115 03:06:25.195242   28344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:25.208693   28344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:25.217908   28344 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:25.217979   28344 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:25.227077   28344 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:25.227098   28344 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:25.231996   28344 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:25.232019   28344 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:25.232030   28344 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:25.232051   28344 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:25.232482   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:25.232528   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:25.246534   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I0115 03:06:25.246916   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:25.247411   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:25.247437   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:25.247782   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:25.247959   28344 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:25.249460   28344 status.go:330] ha-680410-m02 host status = "Running" (err=<nil>)
	I0115 03:06:25.249475   28344 host.go:66] Checking if "ha-680410-m02" exists ...
	I0115 03:06:25.249847   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:25.249890   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:25.263749   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0115 03:06:25.264178   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:25.264617   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:25.264636   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:25.264964   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:25.265161   28344 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 03:06:25.267803   28344 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:25.268195   28344 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 03:06:25.268216   28344 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:25.268348   28344 host.go:66] Checking if "ha-680410-m02" exists ...
	I0115 03:06:25.268630   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:25.268675   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:25.281653   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0115 03:06:25.282013   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:25.282412   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:25.282433   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:25.282717   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:25.282877   28344 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 03:06:25.283028   28344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:25.283049   28344 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 03:06:25.285734   28344 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:25.286167   28344 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 03:06:25.286208   28344 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 03:06:25.286355   28344 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 03:06:25.286491   28344 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 03:06:25.286639   28344 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 03:06:25.286785   28344 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	W0115 03:06:26.467649   28344 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:26.467718   28344 retry.go:31] will retry after 194.645226ms: dial tcp 192.168.39.178:22: connect: no route to host
	W0115 03:06:29.539594   28344 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.178:22: connect: no route to host
	W0115 03:06:29.539662   28344 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	E0115 03:06:29.539684   28344 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:29.539698   28344 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 03:06:29.539721   28344 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.178:22: connect: no route to host
	I0115 03:06:29.539729   28344 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:29.540027   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:29.540103   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:29.555354   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0115 03:06:29.555817   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:29.556307   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:29.556330   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:29.556624   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:29.556787   28344 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:29.558593   28344 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:29.558611   28344 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:29.558999   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:29.559045   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:29.572528   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0115 03:06:29.572858   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:29.573341   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:29.573379   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:29.573674   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:29.573838   28344 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:29.576622   28344 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:29.577012   28344 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:29.577039   28344 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:29.577142   28344 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:29.577432   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:29.577472   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:29.590662   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0115 03:06:29.591051   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:29.591531   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:29.591550   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:29.591858   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:29.592035   28344 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:29.592223   28344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:29.592244   28344 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:29.594641   28344 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:29.595045   28344 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:29.595072   28344 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:29.595169   28344 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:29.595356   28344 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:29.595500   28344 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:29.595633   28344 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:29.687866   28344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:29.702334   28344 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:29.702359   28344 api_server.go:166] Checking apiserver status ...
	I0115 03:06:29.702408   28344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:29.714802   28344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:29.724139   28344 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:29.724208   28344 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:29.734129   28344 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:29.734155   28344 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:29.739889   28344 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:29.739909   28344 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:29.739918   28344 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:29.739934   28344 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:29.740290   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:29.740328   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:29.755584   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43205
	I0115 03:06:29.755981   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:29.756423   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:29.756451   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:29.756810   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:29.757017   28344 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:29.758547   28344 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:29.758566   28344 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:29.758951   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:29.758990   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:29.772545   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I0115 03:06:29.772907   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:29.773309   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:29.773330   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:29.773598   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:29.773773   28344 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:29.776398   28344 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:29.776837   28344 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:29.776863   28344 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:29.776965   28344 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:29.777274   28344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:29.777312   28344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:29.790413   28344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I0115 03:06:29.790721   28344 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:29.791170   28344 main.go:141] libmachine: Using API Version  1
	I0115 03:06:29.791194   28344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:29.791509   28344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:29.791685   28344 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:29.791894   28344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:29.791932   28344 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:29.794348   28344 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:29.794667   28344 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:29.794695   28344 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:29.794836   28344 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:29.795003   28344 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:29.795126   28344 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:29.795221   28344 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:29.882613   28344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:29.895152   28344 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
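By the third poll the kvm2 driver's GetState call reports m02 as "Stopped" rather than "Running", so minikube skips the SSH probe entirely and the exit status shifts from 3 to 7; per the wording of minikube's own status help text, the exit status encodes VM, cluster, and Kubernetes health as bit flags (e.g. 7 = 1 + 2 + 4, all three not OK). To watch just the flapping node instead of re-querying the whole cluster, one option, assuming status accepts the --node flag as documented for multinode profiles, is:

	$ out/minikube-linux-amd64 -p ha-680410 status -n ha-680410-m02 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'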
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (647.99812ms)

-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0115 03:06:31.515423   28467 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:31.515572   28467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:31.515585   28467 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:31.515593   28467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:31.515793   28467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:31.515944   28467 out.go:303] Setting JSON to false
	I0115 03:06:31.515980   28467 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:31.516093   28467 notify.go:220] Checking for updates...
	I0115 03:06:31.516496   28467 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:31.516515   28467 status.go:255] checking status of ha-680410 ...
	I0115 03:06:31.517045   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.517119   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.537037   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0115 03:06:31.537410   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.537916   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.537943   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.538297   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.538513   28467 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:31.540227   28467 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:31.540243   28467 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:31.540515   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.540550   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.553997   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0115 03:06:31.554402   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.554876   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.554899   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.555249   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.555431   28467 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:31.558162   28467 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:31.558648   28467 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:31.558676   28467 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:31.558801   28467 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:31.559083   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.559130   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.573006   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0115 03:06:31.573402   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.573826   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.573849   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.574107   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.574282   28467 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:31.574487   28467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:31.574510   28467 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:31.577383   28467 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:31.577821   28467 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:31.577870   28467 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:31.577941   28467 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:31.578109   28467 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:31.578263   28467 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:31.578487   28467 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:31.667189   28467 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:31.673014   28467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:31.688093   28467 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:31.688122   28467 api_server.go:166] Checking apiserver status ...
	I0115 03:06:31.688157   28467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:31.700415   28467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:31.710618   28467 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:31.710679   28467 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:31.720005   28467 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:31.720031   28467 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:31.725030   28467 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:31.725055   28467 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:31.725067   28467 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:31.725101   28467 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:31.725406   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.725440   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.740859   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0115 03:06:31.741246   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.741753   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.741772   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.742136   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.742322   28467 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:31.743906   28467 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:06:31.743930   28467 status.go:343] host is not running, skipping remaining checks
	I0115 03:06:31.743937   28467 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:31.743962   28467 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:31.744371   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.744412   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.759814   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I0115 03:06:31.760153   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.760627   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.760647   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.761027   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.761224   28467 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:31.762635   28467 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:31.762647   28467 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:31.762970   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.763008   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.776448   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0115 03:06:31.776818   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.777194   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.777218   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.777478   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.777669   28467 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:31.780100   28467 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:31.780488   28467 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:31.780516   28467 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:31.780623   28467 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:31.781015   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.781054   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.794370   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0115 03:06:31.794706   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.795095   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.795126   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.795424   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.795586   28467 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:31.795752   28467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:31.795774   28467 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:31.798084   28467 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:31.798524   28467 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:31.798558   28467 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:31.798668   28467 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:31.798851   28467 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:31.799009   28467 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:31.799159   28467 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:31.890850   28467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:31.906967   28467 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:31.906996   28467 api_server.go:166] Checking apiserver status ...
	I0115 03:06:31.907035   28467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:31.919517   28467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:31.930125   28467 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:31.930179   28467 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:31.941542   28467 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:31.941565   28467 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:31.946453   28467 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:31.946471   28467 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:31.946477   28467 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:31.946489   28467 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:31.946796   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.946838   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.961049   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0115 03:06:31.961399   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.961815   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.961840   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.962176   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.962375   28467 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:31.963946   28467 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:31.963962   28467 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:31.964253   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.964285   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.978450   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0115 03:06:31.978833   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.979202   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.979221   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.979550   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.979722   28467 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:31.982461   28467 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:31.982967   28467 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:31.983006   28467 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:31.983162   28467 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:31.983450   28467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:31.983489   28467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:31.997752   28467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38883
	I0115 03:06:31.998092   28467 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:31.998543   28467 main.go:141] libmachine: Using API Version  1
	I0115 03:06:31.998569   28467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:31.998897   28467 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:31.999060   28467 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:31.999271   28467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:31.999289   28467 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:32.001944   28467 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:32.002416   28467 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:32.002442   28467 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:32.002541   28467 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:32.002700   28467 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:32.002872   28467 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:32.003046   28467 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:32.090161   28467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:32.102103   28467 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
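Note: each `status` pass above probes every node the same way: libmachine reports the VM state, `systemctl is-active kubelet` checks the kubelet, and for control-plane nodes the apiserver is located with `pgrep`, confirmed unfrozen via its cgroup v1 freezer state, and finally probed at `/healthz`. The sketch below reproduces that last sequence locally as a standalone Go program. It is not minikube's implementation (minikube runs these commands over SSH via ssh_runner); the endpoint 192.168.39.254:8443 is copied from this run as an illustration, reading freezer.state may require root, and TLS verification is skipped because the cluster CA is not loaded here.

	// probe.go: a minimal local sketch of the apiserver check visible in the
	// log above (pgrep -> cgroup freezer state -> /healthz).
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// 1. Find the newest kube-apiserver process, as `pgrep -xnf` does in the log.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "apiserver process not found:", err)
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))
	
		// 2. Resolve its freezer cgroup from /proc/<pid>/cgroup and confirm the
		//    state is THAWED (cgroup v1 layout, as in the log). May need root.
		cg, err := exec.Command("sh", "-c",
			fmt.Sprintf("egrep '^[0-9]+:freezer:' /proc/%s/cgroup | cut -d: -f3", pid)).Output()
		if err == nil {
			state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + strings.TrimSpace(string(cg)) + "/freezer.state")
			fmt.Printf("freezer state: %s", state)
		}
	
		// 3. GET /healthz on the control-plane endpoint recorded in kubeconfig.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Fprintln(os.Stderr, "healthz:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode)
	}

A 200 with body "ok", as seen twice above, is what lets status.go report `apiserver status = Running` for ha-680410 and ha-680410-m03.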
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (668.544255ms)

                                                
                                                
-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 03:06:34.594160   28539 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:34.594308   28539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:34.594322   28539 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:34.594330   28539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:34.594500   28539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:34.594674   28539 out.go:303] Setting JSON to false
	I0115 03:06:34.594708   28539 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:34.594754   28539 notify.go:220] Checking for updates...
	I0115 03:06:34.595176   28539 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:34.595194   28539 status.go:255] checking status of ha-680410 ...
	I0115 03:06:34.595845   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.595883   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.617245   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0115 03:06:34.617659   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.618208   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.618227   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.618541   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.618737   28539 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:34.620305   28539 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:34.620318   28539 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:34.620575   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.620606   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.634439   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0115 03:06:34.634784   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.635320   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.635351   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.635643   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.635867   28539 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:34.638460   28539 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:34.638942   28539 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:34.638975   28539 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:34.639142   28539 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:34.639465   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.639503   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.654216   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I0115 03:06:34.654653   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.655060   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.655077   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.655348   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.655565   28539 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:34.655764   28539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:34.655783   28539 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:34.658262   28539 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:34.658643   28539 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:34.658683   28539 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:34.658780   28539 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:34.658946   28539 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:34.659056   28539 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:34.659192   28539 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:34.756896   28539 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:34.764138   28539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:34.777948   28539 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:34.777973   28539 api_server.go:166] Checking apiserver status ...
	I0115 03:06:34.778003   28539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:34.790109   28539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:34.797956   28539 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:34.798016   28539 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:34.807735   28539 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:34.807759   28539 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:34.814818   28539 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:34.814839   28539 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:34.814846   28539 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:34.814861   28539 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:34.815153   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.815200   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.829168   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0115 03:06:34.829543   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.829993   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.830012   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.830305   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.830495   28539 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:34.831945   28539 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:06:34.831960   28539 status.go:343] host is not running, skipping remaining checks
	I0115 03:06:34.831965   28539 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:34.831984   28539 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:34.832266   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.832296   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.845815   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0115 03:06:34.846136   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.846543   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.846563   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.846866   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.847050   28539 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:34.848616   28539 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:34.848640   28539 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:34.848933   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.848972   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.862070   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0115 03:06:34.862415   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.862812   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.862832   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.863101   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.863261   28539 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:34.865572   28539 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:34.865970   28539 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:34.865998   28539 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:34.866156   28539 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:34.866478   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:34.866521   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:34.879775   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40489
	I0115 03:06:34.880139   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:34.880562   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:34.880582   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:34.880882   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:34.881036   28539 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:34.881224   28539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:34.881243   28539 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:34.883844   28539 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:34.884258   28539 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:34.884297   28539 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:34.884408   28539 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:34.884569   28539 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:34.884708   28539 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:34.884835   28539 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:34.979327   28539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:34.996246   28539 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:34.996269   28539 api_server.go:166] Checking apiserver status ...
	I0115 03:06:34.996295   28539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:35.012611   28539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:35.029633   28539 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:35.029691   28539 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:35.040587   28539 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:35.040610   28539 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:35.046007   28539 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:35.046027   28539 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:35.046034   28539 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:35.046046   28539 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:35.046378   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:35.046422   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:35.060905   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0115 03:06:35.061357   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:35.061917   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:35.061941   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:35.062250   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:35.062431   28539 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:35.064091   28539 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:35.064105   28539 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:35.064387   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:35.064426   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:35.079645   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0115 03:06:35.080050   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:35.080527   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:35.080555   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:35.080876   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:35.081110   28539 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:35.084287   28539 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:35.084700   28539 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:35.084735   28539 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:35.084862   28539 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:35.085270   28539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:35.085317   28539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:35.099270   28539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42979
	I0115 03:06:35.099716   28539 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:35.100207   28539 main.go:141] libmachine: Using API Version  1
	I0115 03:06:35.100232   28539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:35.100582   28539 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:35.100820   28539 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:35.101019   28539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:35.101036   28539 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:35.104093   28539 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:35.104537   28539 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:35.104566   28539 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:35.104641   28539 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:35.104809   28539 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:35.104915   28539 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:35.105051   28539 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:35.190219   28539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:35.202939   28539 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
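The repeated, near-identical invocations at 03:06:34, 03:06:37, and 03:06:45 are ha_test.go:428 polling until ha-680410-m02 reports Running again after the restart; `minikube status` keeps exiting non-zero as long as any node is stopped. The exit status 7 appears to be minikube's bitmask encoding for a node whose host, kubelet, and apiserver are all down (treat that exact encoding as an inference from this log, not a confirmed spec). The sketch below shows the kind of retry loop being driven here; it is not the actual test code, and the binary path and profile name are taken from the log.

	// poll.go: a sketch of a poll-until-healthy loop around `minikube status`.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for {
			cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-680410",
				"status", "-v=7", "--alsologtostderr")
			if err := cmd.Run(); err == nil {
				fmt.Println("all nodes report Running")
				return
			} else {
				var ee *exec.ExitError
				if errors.As(err, &ee) {
					// Exit status 7 in this run corresponds to m02 being
					// fully stopped (host, kubelet, apiserver).
					fmt.Println("status exit code:", ee.ExitCode())
				}
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up waiting for ha-680410-m02")
				return
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3-7s between attempts
		}
	}

The growing gaps between attempts in the timestamps are consistent with such a loop backing off while m02 stays Stopped.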
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (630.186844ms)

                                                
                                                
-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 03:06:37.868657   28623 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:37.868785   28623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:37.868793   28623 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:37.868798   28623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:37.869000   28623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:37.869154   28623 out.go:303] Setting JSON to false
	I0115 03:06:37.869187   28623 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:37.869297   28623 notify.go:220] Checking for updates...
	I0115 03:06:37.869743   28623 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:37.869762   28623 status.go:255] checking status of ha-680410 ...
	I0115 03:06:37.870194   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:37.870243   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:37.890203   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0115 03:06:37.890599   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:37.891108   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:37.891130   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:37.891478   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:37.891667   28623 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:37.893181   28623 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:37.893198   28623 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:37.893446   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:37.893479   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:37.908021   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0115 03:06:37.908444   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:37.908889   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:37.908906   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:37.909253   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:37.909412   28623 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:37.912158   28623 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:37.912534   28623 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:37.912561   28623 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:37.912725   28623 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:37.912994   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:37.913025   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:37.926273   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I0115 03:06:37.926586   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:37.926996   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:37.927016   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:37.927297   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:37.927488   28623 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:37.927673   28623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:37.927697   28623 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:37.929905   28623 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:37.930268   28623 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:37.930292   28623 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:37.930419   28623 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:37.930585   28623 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:37.930744   28623 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:37.930891   28623 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:38.018330   28623 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:38.024150   28623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:38.037167   28623 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:38.037190   28623 api_server.go:166] Checking apiserver status ...
	I0115 03:06:38.037214   28623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:38.049131   28623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:38.057182   28623 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:38.057239   28623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:38.065300   28623 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:38.065318   28623 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:38.070307   28623 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:38.070323   28623 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:38.070331   28623 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:38.070348   28623 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:38.070718   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.070756   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.084734   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I0115 03:06:38.085124   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.085557   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.085579   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.085879   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.086052   28623 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:38.087584   28623 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:06:38.087595   28623 status.go:343] host is not running, skipping remaining checks
	I0115 03:06:38.087600   28623 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:38.087618   28623 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:38.087884   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.087916   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.102701   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46223
	I0115 03:06:38.103091   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.103653   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.103685   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.104023   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.104204   28623 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:38.105673   28623 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:38.105688   28623 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:38.106053   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.106100   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.120292   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0115 03:06:38.120678   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.121089   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.121106   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.121408   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.121597   28623 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:38.124158   28623 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:38.124564   28623 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:38.124598   28623 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:38.124715   28623 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:38.125009   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.125051   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.138419   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0115 03:06:38.138835   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.139243   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.139265   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.139571   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.139777   28623 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:38.139934   28623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:38.139952   28623 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:38.142414   28623 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:38.142790   28623 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:38.142816   28623 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:38.142924   28623 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:38.143069   28623 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:38.143212   28623 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:38.143321   28623 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:38.234739   28623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:38.249007   28623 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:38.249031   28623 api_server.go:166] Checking apiserver status ...
	I0115 03:06:38.249077   28623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:38.261211   28623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:38.270796   28623 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:38.270834   28623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:38.280664   28623 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:38.280683   28623 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:38.285747   28623 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:38.285768   28623 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:38.285778   28623 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:38.285801   28623 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:38.286071   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.286100   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.300027   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38119
	I0115 03:06:38.300406   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.300861   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.300888   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.301237   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.301432   28623 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:38.302859   28623 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:38.302873   28623 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:38.303130   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.303159   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.316776   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0115 03:06:38.317056   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.317471   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.317494   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.317765   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.317930   28623 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:38.320602   28623 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:38.321025   28623 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:38.321055   28623 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:38.321191   28623 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:38.321531   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:38.321574   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:38.334715   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0115 03:06:38.335077   28623 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:38.335516   28623 main.go:141] libmachine: Using API Version  1
	I0115 03:06:38.335534   28623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:38.335807   28623 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:38.335982   28623 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:38.336157   28623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:38.336174   28623 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:38.338453   28623 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:38.338787   28623 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:38.338816   28623 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:38.338970   28623 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:38.339146   28623 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:38.339282   28623 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:38.339425   28623 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:38.430196   28623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:38.441873   28623 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
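One other pattern worth reading out of the log: every `DBG | found host DHCP lease matching ...` line is the kvm2 driver resolving a VM's IP by matching its MAC address against the libvirt DHCP leases of the mk-ha-680410 network. The same leases can be inspected by hand, as sketched below; this assumes virsh is installed and the user can reach the libvirt daemon, and the network name and MAC are copied from this run.

	// leases.go: a sketch for inspecting the libvirt DHCP leases the kvm2
	// driver matches against in the log above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("virsh", "net-dhcp-leases", "mk-ha-680410").Output()
		if err != nil {
			fmt.Println("virsh failed:", err)
			return
		}
		// Print only the lease for ha-680410's MAC (52:54:00:f3:e1:70).
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "52:54:00:f3:e1:70") {
				fmt.Println(line)
			}
		}
	}

The expiry times in the matched leases (e.g. 04:02:11, 04:04:07 UTC) line up with when each node was created or last renewed during the test run.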
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (653.902616ms)

                                                
                                                
-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 03:06:45.088109   28705 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:45.088271   28705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:45.088282   28705 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:45.088288   28705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:45.088551   28705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:45.088787   28705 out.go:303] Setting JSON to false
	I0115 03:06:45.088833   28705 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:45.088921   28705 notify.go:220] Checking for updates...
	I0115 03:06:45.089277   28705 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:45.089293   28705 status.go:255] checking status of ha-680410 ...
	I0115 03:06:45.089770   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.089851   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.104075   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0115 03:06:45.104445   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.104983   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.105001   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.105356   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.105551   28705 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:45.107046   28705 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:45.107072   28705 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:45.107372   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.107425   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.121425   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I0115 03:06:45.121751   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.122137   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.122159   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.122463   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.122642   28705 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:45.125156   28705 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:45.125585   28705 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:45.125616   28705 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:45.125771   28705 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:45.126061   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.126104   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.139817   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0115 03:06:45.140164   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.140597   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.140618   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.140921   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.141112   28705 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:45.141314   28705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:45.141340   28705 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:45.144005   28705 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:45.144441   28705 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:45.144468   28705 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:45.144618   28705 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:45.144806   28705 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:45.144947   28705 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:45.145081   28705 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:45.235215   28705 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:45.241967   28705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:45.255677   28705 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:45.255701   28705 api_server.go:166] Checking apiserver status ...
	I0115 03:06:45.255733   28705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:45.268275   28705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:45.278897   28705 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:45.278961   28705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:45.289897   28705 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:45.289920   28705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:45.294922   28705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:45.294952   28705 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:45.294964   28705 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:45.294982   28705 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:45.295265   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.295296   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.309184   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33167
	I0115 03:06:45.309559   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.310041   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.310062   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.310364   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.310582   28705 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:45.312133   28705 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:06:45.312145   28705 status.go:343] host is not running, skipping remaining checks
	I0115 03:06:45.312149   28705 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:45.312168   28705 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:45.312520   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.312565   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.325850   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0115 03:06:45.326233   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.326610   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.326625   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.326902   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.327086   28705 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:45.328486   28705 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:45.328502   28705 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:45.328775   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.328806   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.342457   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0115 03:06:45.342814   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.343195   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.343219   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.343507   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.343651   28705 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:45.345990   28705 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:45.346402   28705 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:45.346429   28705 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:45.346562   28705 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:45.346885   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.346921   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.360152   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0115 03:06:45.360482   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.360906   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.360928   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.361192   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.361374   28705 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:45.361538   28705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:45.361560   28705 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:45.364330   28705 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:45.364707   28705 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:45.364750   28705 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:45.364866   28705 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:45.365042   28705 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:45.365204   28705 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:45.365342   28705 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:45.463873   28705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:45.488616   28705 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:45.488647   28705 api_server.go:166] Checking apiserver status ...
	I0115 03:06:45.488686   28705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:45.499889   28705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:45.509036   28705 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:45.509100   28705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:45.518399   28705 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:45.518421   28705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:45.523722   28705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:45.523743   28705 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:45.523753   28705 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:45.523773   28705 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:45.524137   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.524182   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.538615   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I0115 03:06:45.539043   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.539506   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.539524   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.539807   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.539989   28705 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:45.541613   28705 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:45.541630   28705 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:45.541902   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.541933   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.555529   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0115 03:06:45.555842   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.556284   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.556305   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.556586   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.556772   28705 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:45.559317   28705 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:45.559736   28705 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:45.559764   28705 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:45.559910   28705 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:45.560210   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:45.560241   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:45.573817   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0115 03:06:45.574158   28705 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:45.574579   28705 main.go:141] libmachine: Using API Version  1
	I0115 03:06:45.574605   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:45.574961   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:45.575117   28705 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:45.575311   28705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:45.575335   28705 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:45.577819   28705 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:45.578268   28705 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:45.578302   28705 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:45.578436   28705 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:45.578597   28705 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:45.578738   28705 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:45.578870   28705 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:45.668057   28705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:45.685467   28705 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
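Each run above ends at status.go:257 with a per-node status struct, and the command exits with status 7 for as long as ha-680410-m02 reports Host:Stopped. The sketch below models that relationship, under the assumption that exit code 7 simply flags any node whose host is not Running; the log shows the correlation, not minikube's exact exit-code table.

	package main

	import (
		"fmt"
		"os"
	)

	// NodeStatus mirrors the struct printed at status.go:257 in the traces above.
	type NodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	// exitCode sketches the decision the traces imply: every run in which any
	// node reports Host:Stopped exits with status 7. Treating 7 as a flat
	// "some node is down" code is an assumption, not a documented mapping.
	func exitCode(nodes []NodeStatus) int {
		for _, n := range nodes {
			if n.Host != "Running" {
				return 7
			}
		}
		return 0
	}

	func main() {
		nodes := []NodeStatus{
			{Name: "ha-680410", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
			{Name: "ha-680410-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
			{Name: "ha-680410-m03", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
			{Name: "ha-680410-m04", Host: "Running", Kubelet: "Running", APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true},
		}
		for _, n := range nodes {
			fmt.Printf("%s\n\thost: %s\n\tkubelet: %s\n", n.Name, n.Host, n.Kubelet)
		}
		os.Exit(exitCode(nodes))
	}

For the worker node ha-680410-m04, APIServer and Kubeconfig print as "Irrelevant" rather than Running/Stopped, matching the stdout blocks above where the worker entry omits those two lines.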
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (635.258932ms)

                                                
                                                
-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 03:06:49.954009   28789 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:49.954188   28789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:49.954203   28789 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:49.954210   28789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:49.954418   28789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:49.954617   28789 out.go:303] Setting JSON to false
	I0115 03:06:49.954659   28789 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:49.954752   28789 notify.go:220] Checking for updates...
	I0115 03:06:49.955069   28789 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:49.955085   28789 status.go:255] checking status of ha-680410 ...
	I0115 03:06:49.955577   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:49.955650   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:49.970740   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0115 03:06:49.971247   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:49.971781   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:49.971825   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:49.972131   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:49.972321   28789 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:49.973863   28789 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:49.973879   28789 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:49.974153   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:49.974191   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:49.987883   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0115 03:06:49.988199   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:49.988655   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:49.988677   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:49.988981   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:49.989153   28789 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:49.991718   28789 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:49.992159   28789 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:49.992199   28789 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:49.992271   28789 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:49.992665   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:49.992706   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.006153   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0115 03:06:50.006538   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.006943   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.006963   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.007270   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.007511   28789 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:50.007684   28789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:50.007708   28789 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:50.010315   28789 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:50.010826   28789 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:50.010861   28789 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:50.010967   28789 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:50.011126   28789 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:50.011284   28789 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:50.011436   28789 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:50.099524   28789 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:50.106553   28789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:50.121572   28789 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:50.121599   28789 api_server.go:166] Checking apiserver status ...
	I0115 03:06:50.121638   28789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:50.134964   28789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:50.145162   28789 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:50.145228   28789 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:50.154544   28789 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:50.154573   28789 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:50.162717   28789 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:50.162742   28789 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:50.162754   28789 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:50.162776   28789 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:50.163171   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.163230   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.178870   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0115 03:06:50.179286   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.179772   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.179796   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.180148   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.180340   28789 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:50.181870   28789 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:06:50.181880   28789 status.go:343] host is not running, skipping remaining checks
	I0115 03:06:50.181885   28789 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:50.181898   28789 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:50.182192   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.182236   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.195647   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0115 03:06:50.195979   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.196399   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.196418   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.196737   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.196922   28789 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:50.198380   28789 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:50.198393   28789 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:50.198681   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.198718   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.212026   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0115 03:06:50.212373   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.212793   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.212816   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.213058   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.213216   28789 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:50.215839   28789 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:50.216240   28789 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:50.216269   28789 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:50.216405   28789 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:50.216797   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.216841   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.230440   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0115 03:06:50.230773   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.231205   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.231217   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.231536   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.231704   28789 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:50.231873   28789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:50.231893   28789 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:50.234389   28789 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:50.234806   28789 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:50.234829   28789 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:50.234946   28789 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:50.235099   28789 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:50.235234   28789 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:50.235348   28789 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:50.327491   28789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:50.341754   28789 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:50.341781   28789 api_server.go:166] Checking apiserver status ...
	I0115 03:06:50.341817   28789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:50.353540   28789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:50.362423   28789 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:50.362492   28789 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:50.371561   28789 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:50.371597   28789 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:50.376642   28789 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:50.376661   28789 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:50.376669   28789 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:50.376681   28789 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:50.377104   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.377151   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.391020   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0115 03:06:50.391382   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.391805   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.391827   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.392120   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.392273   28789 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:50.393783   28789 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:50.393799   28789 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:50.394117   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.394154   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.407599   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I0115 03:06:50.407943   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.408298   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.408320   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.408605   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.408767   28789 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:50.411453   28789 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:50.411871   28789 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:50.411902   28789 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:50.412043   28789 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:50.412450   28789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:50.412498   28789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:50.427789   28789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0115 03:06:50.428121   28789 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:50.428527   28789 main.go:141] libmachine: Using API Version  1
	I0115 03:06:50.428545   28789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:50.428843   28789 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:50.429025   28789 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:50.429215   28789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:50.429236   28789 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:50.431941   28789 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:50.432406   28789 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:50.432426   28789 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:50.432548   28789 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:50.432713   28789 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:50.432870   28789 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:50.433004   28789 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:50.519029   28789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:50.531449   28789 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
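ha_test.go:428 re-runs the same status command back to back (PIDs 28705, 28789, 28886 above), which suggests a poll-until-healthy loop around the restarted secondary node. A hedged sketch of such a loop follows, reusing the binary path and profile name from the log; the 30-second deadline and 5-second interval are illustrative, not the test's actual values.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForHealthyHA sketches the polling that ha_test.go:428 implies:
	// re-run "minikube status" until it exits 0 or the deadline passes.
	func waitForHealthyHA(deadline time.Duration) error {
		start := time.Now()
		for time.Since(start) < deadline {
			cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-680410",
				"status", "-v=7", "--alsologtostderr")
			if err := cmd.Run(); err == nil {
				return nil // exit status 0: every node is Running/Configured
			}
			// A non-zero exit (status 7 in the runs above) means at least one
			// node is still down, e.g. ha-680410-m02 reporting Host:Stopped.
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("cluster did not become healthy within %s", deadline)
	}

	func main() {
		if err := waitForHealthyHA(30 * time.Second); err != nil {
			fmt.Println(err)
		}
	}

In the failing run recorded here, every retry returns exit status 7 with ha-680410-m02 unchanged, so a loop like this would exhaust its deadline and the test reports the failure.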
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (642.181763ms)

                                                
                                                
-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 03:06:58.064968   28886 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:06:58.065214   28886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:58.065223   28886 out.go:309] Setting ErrFile to fd 2...
	I0115 03:06:58.065230   28886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:06:58.065415   28886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:06:58.065604   28886 out.go:303] Setting JSON to false
	I0115 03:06:58.065652   28886 mustload.go:65] Loading cluster: ha-680410
	I0115 03:06:58.065752   28886 notify.go:220] Checking for updates...
	I0115 03:06:58.066054   28886 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:06:58.066069   28886 status.go:255] checking status of ha-680410 ...
	I0115 03:06:58.066603   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.066672   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.080488   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0115 03:06:58.080863   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.081349   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.081364   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.081643   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.081820   28886 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:06:58.083178   28886 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:06:58.083194   28886 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:58.083508   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.083549   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.097375   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0115 03:06:58.097764   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.098295   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.098325   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.098679   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.098876   28886 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:06:58.101943   28886 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:58.102390   28886 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:58.102419   28886 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:58.102550   28886 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:06:58.102821   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.102852   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.116220   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34379
	I0115 03:06:58.116568   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.116947   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.116977   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.117241   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.117395   28886 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:06:58.117578   28886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:58.117607   28886 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:06:58.120179   28886 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:58.120616   28886 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:06:58.120655   28886 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:06:58.120772   28886 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:06:58.120934   28886 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:06:58.121083   28886 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:06:58.121226   28886 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:06:58.210828   28886 ssh_runner.go:195] Run: systemctl --version
	I0115 03:06:58.216350   28886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:58.230006   28886 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:58.230027   28886 api_server.go:166] Checking apiserver status ...
	I0115 03:06:58.230055   28886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:58.242939   28886 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:06:58.251688   28886 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:06:58.251751   28886 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:06:58.260770   28886 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:58.260807   28886 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:58.266058   28886 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:58.266078   28886 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:06:58.266086   28886 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:58.266099   28886 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:06:58.266371   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.266404   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.281083   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0115 03:06:58.281457   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.281957   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.281991   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.282285   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.282454   28886 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:06:58.283911   28886 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:06:58.283923   28886 status.go:343] host is not running, skipping remaining checks
	I0115 03:06:58.283927   28886 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:58.283945   28886 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:06:58.284283   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.284325   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.297591   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0115 03:06:58.297949   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.298425   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.298448   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.298750   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.298905   28886 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:06:58.300517   28886 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:06:58.300535   28886 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:58.300896   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.300929   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.314037   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0115 03:06:58.314409   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.314798   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.314823   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.315091   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.315289   28886 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:06:58.317710   28886 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:58.318107   28886 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:58.318128   28886 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:58.318277   28886 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:06:58.318539   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.318568   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.332517   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0115 03:06:58.332829   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.333290   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.333304   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.333651   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.333833   28886 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:06:58.334006   28886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:58.334031   28886 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:06:58.336645   28886 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:58.336984   28886 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:06:58.337005   28886 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:06:58.337165   28886 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:06:58.337338   28886 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:06:58.337498   28886 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:06:58.337603   28886 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:06:58.432173   28886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:58.446029   28886 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:06:58.446058   28886 api_server.go:166] Checking apiserver status ...
	I0115 03:06:58.446094   28886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:06:58.459632   28886 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:06:58.468483   28886 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:06:58.468527   28886 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:06:58.484921   28886 api_server.go:204] freezer state: "THAWED"
	I0115 03:06:58.484941   28886 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:06:58.489760   28886 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:06:58.489785   28886 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:06:58.489792   28886 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:06:58.489804   28886 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:06:58.490093   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.490128   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.504609   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0115 03:06:58.504975   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.505379   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.505397   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.505689   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.505838   28886 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:06:58.507396   28886 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:06:58.507412   28886 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:58.507716   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.507774   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.522784   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0115 03:06:58.523112   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.523575   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.523598   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.523872   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.524059   28886 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:06:58.526592   28886 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:58.527005   28886 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:58.527031   28886 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:58.527198   28886 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:06:58.527589   28886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:06:58.527624   28886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:06:58.541857   28886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I0115 03:06:58.542247   28886 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:06:58.542690   28886 main.go:141] libmachine: Using API Version  1
	I0115 03:06:58.542718   28886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:06:58.542992   28886 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:06:58.543179   28886 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:06:58.543364   28886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:06:58.543396   28886 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:06:58.545860   28886 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:58.546273   28886 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:06:58.546292   28886 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:06:58.546430   28886 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:06:58.546594   28886 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:06:58.546727   28886 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:06:58.546865   28886 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:06:58.634918   28886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:06:58.648780   28886 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
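The stderr trace above shows the three-step probe that status.go/api_server.go run against each control-plane node: find the kube-apiserver PID with pgrep, confirm the process's freezer cgroup is THAWED (i.e. the node is not paused), then GET /healthz on the shared endpoint and expect 200 "ok". Below is a minimal standalone sketch of that sequence; the `run` helper executing locally via `sh -c` is our simplification of minikube's SSH runner, and certificate verification is skipped here for brevity.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"regexp"
	"strings"
	"time"
)

// run stands in for minikube's ssh_runner: the real thing executes these
// commands on the node over SSH; this sketch simply execs locally via sh -c.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// apiserverHealthy mirrors the probe order in the trace above:
// pgrep -> freezer cgroup state -> GET /healthz.
func apiserverHealthy(endpoint string) error {
	// 1. Locate the kube-apiserver process.
	pid, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*")
	if err != nil {
		return fmt.Errorf("apiserver process not found: %w", err)
	}

	// 2. Resolve its freezer cgroup and require state THAWED (not paused).
	line, err := run(fmt.Sprintf("sudo egrep '^[0-9]+:freezer:' /proc/%s/cgroup", pid))
	if err != nil {
		return err
	}
	m := regexp.MustCompile(`^[0-9]+:freezer:(.+)$`).FindStringSubmatch(line)
	if m == nil {
		return fmt.Errorf("unexpected cgroup line: %q", line)
	}
	state, err := run("sudo cat /sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
	if err != nil {
		return err
	}
	if state != "THAWED" {
		return fmt.Errorf("apiserver cgroup is %q, expected THAWED", state)
	}

	// 3. Hit /healthz on the shared control-plane endpoint; expect 200 "ok".
	// (Skipping TLS verification is a shortcut in this sketch only.)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.39.254:8443"))
}
```

Note that the endpoint is the load-balanced HA address (192.168.39.254:8443), not any single node's IP, which is why every control-plane node's probe in the trace hits the same URL.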
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (638.038713ms)
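For readers unfamiliar with the flags: `-v=7` and `--alsologtostderr` are klog-style options, which is why the stderr block below carries the full `I...` trace alongside the plain stdout table. A minimal sketch of the same effect in a standalone program, assuming `k8s.io/klog/v2` (minikube's own flag wiring differs in detail):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags, then set the same options the test passes:
	// verbosity 7 and mirroring of log output to stderr.
	klog.InitFlags(nil)
	_ = flag.Set("v", "7")
	_ = flag.Set("alsologtostderr", "true")
	flag.Parse()
	defer klog.Flush()

	klog.V(7).Info("verbose trace line, written to stderr")
}
```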

-- stdout --
	ha-680410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-680410-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0115 03:07:14.550708   28990 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:07:14.550965   28990 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:07:14.550975   28990 out.go:309] Setting ErrFile to fd 2...
	I0115 03:07:14.550979   28990 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:07:14.551197   28990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:07:14.551467   28990 out.go:303] Setting JSON to false
	I0115 03:07:14.551526   28990 mustload.go:65] Loading cluster: ha-680410
	I0115 03:07:14.551615   28990 notify.go:220] Checking for updates...
	I0115 03:07:14.551959   28990 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:07:14.551973   28990 status.go:255] checking status of ha-680410 ...
	I0115 03:07:14.552417   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.552537   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.567054   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0115 03:07:14.567459   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.568055   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.568076   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.568474   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.568691   28990 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:07:14.570278   28990 status.go:330] ha-680410 host status = "Running" (err=<nil>)
	I0115 03:07:14.570295   28990 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:07:14.570598   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.570632   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.584743   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0115 03:07:14.585091   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.585473   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.585497   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.585777   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.585954   28990 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 03:07:14.588415   28990 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:07:14.588805   28990 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:07:14.588831   28990 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:07:14.588957   28990 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:07:14.589266   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.589300   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.602514   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33293
	I0115 03:07:14.602835   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.603214   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.603235   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.603518   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.603710   28990 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:07:14.603884   28990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:07:14.603902   28990 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:07:14.606556   28990 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:07:14.606959   28990 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:07:14.606989   28990 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:07:14.607164   28990 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:07:14.607328   28990 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:07:14.607507   28990 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:07:14.607626   28990 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:07:14.695932   28990 ssh_runner.go:195] Run: systemctl --version
	I0115 03:07:14.702652   28990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:07:14.718267   28990 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:07:14.718291   28990 api_server.go:166] Checking apiserver status ...
	I0115 03:07:14.718328   28990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:07:14.730946   28990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I0115 03:07:14.740333   28990 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a"
	I0115 03:07:14.740381   28990 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbda04cd13a5eeea4fa985e962af2b334/7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a/freezer.state
	I0115 03:07:14.750138   28990 api_server.go:204] freezer state: "THAWED"
	I0115 03:07:14.750162   28990 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:07:14.755162   28990 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:07:14.755183   28990 status.go:424] ha-680410 apiserver status = Running (err=<nil>)
	I0115 03:07:14.755195   28990 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:07:14.755226   28990 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:07:14.755558   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.755599   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.769519   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0115 03:07:14.769848   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.770253   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.770272   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.770582   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.770797   28990 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:07:14.772154   28990 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:07:14.772165   28990 status.go:343] host is not running, skipping remaining checks
	I0115 03:07:14.772170   28990 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:07:14.772184   28990 status.go:255] checking status of ha-680410-m03 ...
	I0115 03:07:14.772444   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.772474   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.785826   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0115 03:07:14.786175   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.786566   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.786581   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.786904   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.787070   28990 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:07:14.788374   28990 status.go:330] ha-680410-m03 host status = "Running" (err=<nil>)
	I0115 03:07:14.788387   28990 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:07:14.788672   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.788710   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.801704   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I0115 03:07:14.802049   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.802471   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.802496   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.802875   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.803046   28990 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:07:14.805670   28990 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:07:14.806049   28990 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:07:14.806083   28990 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:07:14.806193   28990 host.go:66] Checking if "ha-680410-m03" exists ...
	I0115 03:07:14.806472   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.806503   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.820070   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35863
	I0115 03:07:14.820405   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.820797   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.820814   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.821124   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.821303   28990 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:07:14.821483   28990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:07:14.821505   28990 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:07:14.823834   28990 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:07:14.824168   28990 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:07:14.824205   28990 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:07:14.824340   28990 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:07:14.824502   28990 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:07:14.824639   28990 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:07:14.824737   28990 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:07:14.919623   28990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:07:14.933798   28990 kubeconfig.go:125] found "ha-680410" server: "https://192.168.39.254:8443"
	I0115 03:07:14.933820   28990 api_server.go:166] Checking apiserver status ...
	I0115 03:07:14.933852   28990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:07:14.948015   28990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0115 03:07:14.956504   28990 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22"
	I0115 03:07:14.956561   28990 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod7cc0165950d58474933e6dbc0fdefac6/70899a411eea3523984bf1242c5ccd6ad068bb9cf5077573eb59585c7e79ca22/freezer.state
	I0115 03:07:14.965572   28990 api_server.go:204] freezer state: "THAWED"
	I0115 03:07:14.965592   28990 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0115 03:07:14.973316   28990 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0115 03:07:14.973341   28990 status.go:424] ha-680410-m03 apiserver status = Running (err=<nil>)
	I0115 03:07:14.973352   28990 status.go:257] ha-680410-m03 status: &{Name:ha-680410-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:07:14.973379   28990 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:07:14.973758   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.973803   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:14.987957   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0115 03:07:14.988307   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:14.988712   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:14.988731   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:14.989060   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:14.989234   28990 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:07:14.990760   28990 status.go:330] ha-680410-m04 host status = "Running" (err=<nil>)
	I0115 03:07:14.990778   28990 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:07:14.991106   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:14.991169   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:15.005986   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0115 03:07:15.006311   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:15.006781   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:15.006809   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:15.007144   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:15.007335   28990 main.go:141] libmachine: (ha-680410-m04) Calling .GetIP
	I0115 03:07:15.010200   28990 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:07:15.010665   28990 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:07:15.010699   28990 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:07:15.010796   28990 host.go:66] Checking if "ha-680410-m04" exists ...
	I0115 03:07:15.011199   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:07:15.011240   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:07:15.024761   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0115 03:07:15.025108   28990 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:07:15.025605   28990 main.go:141] libmachine: Using API Version  1
	I0115 03:07:15.025632   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:07:15.025958   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:07:15.026102   28990 main.go:141] libmachine: (ha-680410-m04) Calling .DriverName
	I0115 03:07:15.026296   28990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:07:15.026322   28990 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHHostname
	I0115 03:07:15.028984   28990 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:07:15.029362   28990 main.go:141] libmachine: (ha-680410-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:e5:a3", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:04:07 +0000 UTC Type:0 Mac:52:54:00:b2:e5:a3 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-680410-m04 Clientid:01:52:54:00:b2:e5:a3}
	I0115 03:07:15.029392   28990 main.go:141] libmachine: (ha-680410-m04) DBG | domain ha-680410-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:b2:e5:a3 in network mk-ha-680410
	I0115 03:07:15.029579   28990 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHPort
	I0115 03:07:15.029727   28990 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHKeyPath
	I0115 03:07:15.029882   28990 main.go:141] libmachine: (ha-680410-m04) Calling .GetSSHUsername
	I0115 03:07:15.030017   28990 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m04/id_rsa Username:docker}
	I0115 03:07:15.118372   28990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:07:15.132232   28990 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr" : exit status 7
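The `exit status 7` is informative by itself: minikube packs per-node component failures into bits of the status exit code, and with ha-680410-m02 reporting Host, Kubelet, APIServer, and Kubeconfig all Stopped, the host, cluster, and kubeconfig bits are all set (1|2|4 = 7). A small decoder follows; the constants mirror the scheme in cmd/minikube/cmd/status.go as of recent releases, so treat the exact values as an assumption for this particular build:

```go
package main

import "fmt"

// Bit flags assumed to match minikube's status exit-code encoding
// (not verified against this exact build).
const (
	hostNotRunningFlag    = 1 << 0 // VM/host not running
	clusterNotRunningFlag = 1 << 1 // kubelet or apiserver not running
	k8sNotConfiguredFlag  = 1 << 2 // kubeconfig not pointing at the cluster
)

// decode lists the component failures packed into a status exit code.
func decode(code int) []string {
	var reasons []string
	if code&hostNotRunningFlag != 0 {
		reasons = append(reasons, "host not running")
	}
	if code&clusterNotRunningFlag != 0 {
		reasons = append(reasons, "kubelet/apiserver not running")
	}
	if code&k8sNotConfiguredFlag != 0 {
		reasons = append(reasons, "kubeconfig not configured")
	}
	return reasons
}

func main() {
	// ha-680410-m02 is fully stopped, so all three bits are set: 1|2|4 = 7.
	fmt.Println(decode(7))
}
```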
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-680410 -n ha-680410
helpers_test.go:244: <<< TestHA/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestHA/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-680410 logs -n 25: (1.454392366s)
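The `(dbg) Run:` / `(dbg) Done:` pairs throughout this report come from a harness wrapper that shells out and, on success, logs the wall-clock duration. A hypothetical reconstruction is below; the name `runDbg` and its exact shape are ours, not helpers_test.go's (save as e.g. rundbg_test.go):

```go
package helpers

import (
	"os/exec"
	"strings"
	"testing"
	"time"
)

// runDbg is a hypothetical stand-in for the helpers_test.go wrapper that
// emits the "(dbg) Run:" / "(dbg) Done:" lines seen throughout this report.
func runDbg(t *testing.T, name string, args ...string) ([]byte, error) {
	t.Helper()
	cmdline := strings.TrimSpace(name + " " + strings.Join(args, " "))
	t.Logf("(dbg) Run:  %s", cmdline)
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		// Only successful runs get the elapsed-time "Done" line.
		t.Logf("(dbg) Done: %s: (%s)", cmdline, time.Since(start))
	}
	return out, err
}

func TestRunDbg(t *testing.T) {
	if _, err := runDbg(t, "true"); err != nil {
		t.Fatalf("true failed: %v", err)
	}
}
```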
helpers_test.go:252: TestHA/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                Args                                |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410:/home/docker/cp-test_ha-680410-m03_ha-680410.txt         |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410 sudo cat                                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m03_ha-680410.txt                   |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m02:/home/docker/cp-test_ha-680410-m03_ha-680410-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m02 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m03_ha-680410-m02.txt               |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04:/home/docker/cp-test_ha-680410-m03_ha-680410-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m04 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m03_ha-680410-m04.txt               |           |         |         |                     |                     |
	| cp      | ha-680410 cp testdata/cp-test.txt                                  | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04:/home/docker/cp-test.txt                             |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410-m04.txt  |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410:/home/docker/cp-test_ha-680410-m04_ha-680410.txt         |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410 sudo cat                                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m04_ha-680410.txt                   |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m02:/home/docker/cp-test_ha-680410-m04_ha-680410-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m02 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m04_ha-680410-m02.txt               |           |         |         |                     |                     |
	| cp      | ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt                | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m03:/home/docker/cp-test_ha-680410-m04_ha-680410-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n                                                   | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | ha-680410-m04 sudo cat                                             |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-680410 ssh -n ha-680410-m03 sudo cat                            | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:04 UTC |
	|         | /home/docker/cp-test_ha-680410-m04_ha-680410-m03.txt               |           |         |         |                     |                     |
	| node    | ha-680410 node stop m02 -v=7                                       | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:04 UTC | 15 Jan 24 03:05 UTC |
	|         | --alsologtostderr                                                  |           |         |         |                     |                     |
	| node    | ha-680410 node start m02 -v=7                                      | ha-680410 | jenkins | v1.32.0 | 15 Jan 24 03:06 UTC |                     |
	|         | --alsologtostderr                                                  |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 02:58:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 02:58:27.903728   23809 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:58:27.903853   23809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:27.903862   23809 out.go:309] Setting ErrFile to fd 2...
	I0115 02:58:27.903866   23809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:27.904065   23809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:58:27.904637   23809 out.go:303] Setting JSON to false
	I0115 02:58:27.905465   23809 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2453,"bootTime":1705285055,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:58:27.905538   23809 start.go:138] virtualization: kvm guest
	I0115 02:58:27.907797   23809 out.go:177] * [ha-680410] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:58:27.909278   23809 out.go:177]   - MINIKUBE_LOCATION=17909
	I0115 02:58:27.910743   23809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:58:27.909269   23809 notify.go:220] Checking for updates...
	I0115 02:58:27.913534   23809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:58:27.914911   23809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:27.916245   23809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 02:58:27.917510   23809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 02:58:27.918788   23809 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:58:27.950836   23809 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 02:58:27.952083   23809 start.go:296] selected driver: kvm2
	I0115 02:58:27.952097   23809 start.go:900] validating driver "kvm2" against <nil>
	I0115 02:58:27.952118   23809 start.go:911] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 02:58:27.953037   23809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:58:27.953145   23809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17909-7685/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 02:58:27.965710   23809 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 02:58:27.965761   23809 start_flags.go:308] no existing cluster config was found, will generate one from the flags 
	I0115 02:58:27.965944   23809 start_flags.go:943] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 02:58:27.965996   23809 cni.go:84] Creating CNI manager for ""
	I0115 02:58:27.966009   23809 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0115 02:58:27.966017   23809 start_flags.go:317] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 02:58:27.966064   23809 start.go:339] cluster config:
	{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:58:27.966152   23809 iso.go:125] acquiring lock: {Name:mk557eda9a6ce643c635b77cd4c9cb212ca64fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:58:27.968579   23809 out.go:177] * Starting "ha-680410" primary control-plane node in "ha-680410" cluster
	I0115 02:58:27.970736   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:58:27.970759   23809 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 02:58:27.970765   23809 cache.go:56] Caching tarball of preloaded images
	I0115 02:58:27.970839   23809 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 02:58:27.970852   23809 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 02:58:27.971145   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:58:27.971165   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json: {Name:mk893384b7b0ad5aa2d7ef4824af052fc6525c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:58:27.971296   23809 start.go:360] acquireMachinesLock for ha-680410: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 02:58:27.971328   23809 start.go:364] duration metric: took 16.89µs to acquireMachinesLock for "ha-680410"
	I0115 02:58:27.971349   23809 start.go:93] Provisioning new machine with config: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:58:27.971418   23809 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 02:58:27.973140   23809 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 02:58:27.973245   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:58:27.973275   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:58:27.985342   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0115 02:58:27.985713   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:58:27.986229   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:58:27.986247   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:58:27.986535   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:58:27.986685   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:27.986805   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:27.986930   23809 start.go:159] libmachine.API.Create for "ha-680410" (driver="kvm2")
	I0115 02:58:27.986961   23809 client.go:168] LocalClient.Create starting
	I0115 02:58:27.986986   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 02:58:27.987010   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:58:27.987024   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:58:27.987066   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 02:58:27.987083   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:58:27.987097   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:58:27.987114   23809 main.go:141] libmachine: Running pre-create checks...
	I0115 02:58:27.987122   23809 main.go:141] libmachine: (ha-680410) Calling .PreCreateCheck
	I0115 02:58:27.987453   23809 main.go:141] libmachine: (ha-680410) Calling .GetConfigRaw
	I0115 02:58:27.987786   23809 main.go:141] libmachine: Creating machine...
	I0115 02:58:27.987800   23809 main.go:141] libmachine: (ha-680410) Calling .Create
	I0115 02:58:27.987899   23809 main.go:141] libmachine: (ha-680410) Creating KVM machine...
	I0115 02:58:27.989007   23809 main.go:141] libmachine: (ha-680410) DBG | found existing default KVM network
	I0115 02:58:27.989682   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:27.989531   23832 network.go:208] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I0115 02:58:27.989719   23809 main.go:141] libmachine: (ha-680410) DBG | created network xml: 
	I0115 02:58:27.989745   23809 main.go:141] libmachine: (ha-680410) DBG | <network>
	I0115 02:58:27.989771   23809 main.go:141] libmachine: (ha-680410) DBG |   <name>mk-ha-680410</name>
	I0115 02:58:27.989797   23809 main.go:141] libmachine: (ha-680410) DBG |   <dns enable='no'/>
	I0115 02:58:27.989825   23809 main.go:141] libmachine: (ha-680410) DBG |   
	I0115 02:58:27.989834   23809 main.go:141] libmachine: (ha-680410) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0115 02:58:27.989840   23809 main.go:141] libmachine: (ha-680410) DBG |     <dhcp>
	I0115 02:58:27.989848   23809 main.go:141] libmachine: (ha-680410) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0115 02:58:27.989866   23809 main.go:141] libmachine: (ha-680410) DBG |     </dhcp>
	I0115 02:58:27.989890   23809 main.go:141] libmachine: (ha-680410) DBG |   </ip>
	I0115 02:58:27.989904   23809 main.go:141] libmachine: (ha-680410) DBG |   
	I0115 02:58:27.989914   23809 main.go:141] libmachine: (ha-680410) DBG | </network>
	I0115 02:58:27.989921   23809 main.go:141] libmachine: (ha-680410) DBG | 
	I0115 02:58:27.994312   23809 main.go:141] libmachine: (ha-680410) DBG | trying to create private KVM network mk-ha-680410 192.168.39.0/24...
	I0115 02:58:28.057701   23809 main.go:141] libmachine: (ha-680410) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410 ...
	I0115 02:58:28.057743   23809 main.go:141] libmachine: (ha-680410) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 02:58:28.057781   23809 main.go:141] libmachine: (ha-680410) DBG | private KVM network mk-ha-680410 192.168.39.0/24 created
	I0115 02:58:28.057813   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.057611   23832 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:28.057871   23809 main.go:141] libmachine: (ha-680410) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 02:58:28.263960   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.263848   23832 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa...
	I0115 02:58:28.419978   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.419883   23832 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/ha-680410.rawdisk...
	I0115 02:58:28.420003   23809 main.go:141] libmachine: (ha-680410) DBG | Writing magic tar header
	I0115 02:58:28.420013   23809 main.go:141] libmachine: (ha-680410) DBG | Writing SSH key tar header
	I0115 02:58:28.420021   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:28.419992   23832 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410 ...
	I0115 02:58:28.420134   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410 (perms=drwx------)
	I0115 02:58:28.420154   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 02:58:28.420162   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410
	I0115 02:58:28.420172   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 02:58:28.420180   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:28.420205   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 02:58:28.420217   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 02:58:28.420232   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 02:58:28.420239   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home/jenkins
	I0115 02:58:28.420265   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 02:58:28.420287   23809 main.go:141] libmachine: (ha-680410) DBG | Checking permissions on dir: /home
	I0115 02:58:28.420314   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 02:58:28.420330   23809 main.go:141] libmachine: (ha-680410) DBG | Skipping /home - not owner
	I0115 02:58:28.420343   23809 main.go:141] libmachine: (ha-680410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 02:58:28.420361   23809 main.go:141] libmachine: (ha-680410) Creating domain...
	I0115 02:58:28.421252   23809 main.go:141] libmachine: (ha-680410) define libvirt domain using xml: 
	I0115 02:58:28.421276   23809 main.go:141] libmachine: (ha-680410) <domain type='kvm'>
	I0115 02:58:28.421295   23809 main.go:141] libmachine: (ha-680410)   <name>ha-680410</name>
	I0115 02:58:28.421312   23809 main.go:141] libmachine: (ha-680410)   <memory unit='MiB'>2200</memory>
	I0115 02:58:28.421326   23809 main.go:141] libmachine: (ha-680410)   <vcpu>2</vcpu>
	I0115 02:58:28.421342   23809 main.go:141] libmachine: (ha-680410)   <features>
	I0115 02:58:28.421351   23809 main.go:141] libmachine: (ha-680410)     <acpi/>
	I0115 02:58:28.421358   23809 main.go:141] libmachine: (ha-680410)     <apic/>
	I0115 02:58:28.421365   23809 main.go:141] libmachine: (ha-680410)     <pae/>
	I0115 02:58:28.421371   23809 main.go:141] libmachine: (ha-680410)     
	I0115 02:58:28.421377   23809 main.go:141] libmachine: (ha-680410)   </features>
	I0115 02:58:28.421392   23809 main.go:141] libmachine: (ha-680410)   <cpu mode='host-passthrough'>
	I0115 02:58:28.421399   23809 main.go:141] libmachine: (ha-680410)   
	I0115 02:58:28.421404   23809 main.go:141] libmachine: (ha-680410)   </cpu>
	I0115 02:58:28.421410   23809 main.go:141] libmachine: (ha-680410)   <os>
	I0115 02:58:28.421415   23809 main.go:141] libmachine: (ha-680410)     <type>hvm</type>
	I0115 02:58:28.421421   23809 main.go:141] libmachine: (ha-680410)     <boot dev='cdrom'/>
	I0115 02:58:28.421429   23809 main.go:141] libmachine: (ha-680410)     <boot dev='hd'/>
	I0115 02:58:28.421436   23809 main.go:141] libmachine: (ha-680410)     <bootmenu enable='no'/>
	I0115 02:58:28.421443   23809 main.go:141] libmachine: (ha-680410)   </os>
	I0115 02:58:28.421448   23809 main.go:141] libmachine: (ha-680410)   <devices>
	I0115 02:58:28.421456   23809 main.go:141] libmachine: (ha-680410)     <disk type='file' device='cdrom'>
	I0115 02:58:28.421466   23809 main.go:141] libmachine: (ha-680410)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/boot2docker.iso'/>
	I0115 02:58:28.421475   23809 main.go:141] libmachine: (ha-680410)       <target dev='hdc' bus='scsi'/>
	I0115 02:58:28.421496   23809 main.go:141] libmachine: (ha-680410)       <readonly/>
	I0115 02:58:28.421514   23809 main.go:141] libmachine: (ha-680410)     </disk>
	I0115 02:58:28.421534   23809 main.go:141] libmachine: (ha-680410)     <disk type='file' device='disk'>
	I0115 02:58:28.421549   23809 main.go:141] libmachine: (ha-680410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 02:58:28.421569   23809 main.go:141] libmachine: (ha-680410)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/ha-680410.rawdisk'/>
	I0115 02:58:28.421583   23809 main.go:141] libmachine: (ha-680410)       <target dev='hda' bus='virtio'/>
	I0115 02:58:28.421595   23809 main.go:141] libmachine: (ha-680410)     </disk>
	I0115 02:58:28.421610   23809 main.go:141] libmachine: (ha-680410)     <interface type='network'>
	I0115 02:58:28.421623   23809 main.go:141] libmachine: (ha-680410)       <source network='mk-ha-680410'/>
	I0115 02:58:28.421637   23809 main.go:141] libmachine: (ha-680410)       <model type='virtio'/>
	I0115 02:58:28.421650   23809 main.go:141] libmachine: (ha-680410)     </interface>
	I0115 02:58:28.421664   23809 main.go:141] libmachine: (ha-680410)     <interface type='network'>
	I0115 02:58:28.421682   23809 main.go:141] libmachine: (ha-680410)       <source network='default'/>
	I0115 02:58:28.421699   23809 main.go:141] libmachine: (ha-680410)       <model type='virtio'/>
	I0115 02:58:28.421711   23809 main.go:141] libmachine: (ha-680410)     </interface>
	I0115 02:58:28.421722   23809 main.go:141] libmachine: (ha-680410)     <serial type='pty'>
	I0115 02:58:28.421733   23809 main.go:141] libmachine: (ha-680410)       <target port='0'/>
	I0115 02:58:28.421744   23809 main.go:141] libmachine: (ha-680410)     </serial>
	I0115 02:58:28.421761   23809 main.go:141] libmachine: (ha-680410)     <console type='pty'>
	I0115 02:58:28.421775   23809 main.go:141] libmachine: (ha-680410)       <target type='serial' port='0'/>
	I0115 02:58:28.421789   23809 main.go:141] libmachine: (ha-680410)     </console>
	I0115 02:58:28.421803   23809 main.go:141] libmachine: (ha-680410)     <rng model='virtio'>
	I0115 02:58:28.421825   23809 main.go:141] libmachine: (ha-680410)       <backend model='random'>/dev/random</backend>
	I0115 02:58:28.421838   23809 main.go:141] libmachine: (ha-680410)     </rng>
	I0115 02:58:28.421851   23809 main.go:141] libmachine: (ha-680410)     
	I0115 02:58:28.421862   23809 main.go:141] libmachine: (ha-680410)     
	I0115 02:58:28.421875   23809 main.go:141] libmachine: (ha-680410)   </devices>
	I0115 02:58:28.421886   23809 main.go:141] libmachine: (ha-680410) </domain>
	I0115 02:58:28.421901   23809 main.go:141] libmachine: (ha-680410) 
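
The lines above show the kvm2 driver composing a libvirt domain XML (host-passthrough CPU, the boot2docker ISO as cdrom, the raw disk, and two virtio NICs) and then defining it. A minimal sketch of the define-and-start step using the libvirt-go bindings; the connection URI matches the KVMQemuURI seen later in the cluster config, but the wrapper function and the truncated XML are illustrative, not minikube's actual code:

    package main

    import (
    	"log"

    	libvirt "github.com/libvirt/libvirt-go"
    )

    func defineAndStart(domainXML string) error {
    	// Connect to the system libvirt daemon, as the kvm2 driver does.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	// Define the persistent domain from XML, then boot it.
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()
    	return dom.Create()
    }

    func main() {
    	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
    		log.Fatal(err)
    	}
    }
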
	I0115 02:58:28.425805   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:29:85:68 in network default
	I0115 02:58:28.426327   23809 main.go:141] libmachine: (ha-680410) Ensuring networks are active...
	I0115 02:58:28.426358   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:28.426841   23809 main.go:141] libmachine: (ha-680410) Ensuring network default is active
	I0115 02:58:28.427100   23809 main.go:141] libmachine: (ha-680410) Ensuring network mk-ha-680410 is active
	I0115 02:58:28.427590   23809 main.go:141] libmachine: (ha-680410) Getting domain xml...
	I0115 02:58:28.428162   23809 main.go:141] libmachine: (ha-680410) Creating domain...
	I0115 02:58:29.564499   23809 main.go:141] libmachine: (ha-680410) Waiting to get IP...
	I0115 02:58:29.565396   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:29.565783   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:29.565840   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:29.565782   23832 retry.go:31] will retry after 240.639484ms: waiting for machine to come up
	I0115 02:58:29.808229   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:29.808674   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:29.808722   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:29.808632   23832 retry.go:31] will retry after 383.501823ms: waiting for machine to come up
	I0115 02:58:30.195323   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:30.195727   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:30.195759   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:30.195676   23832 retry.go:31] will retry after 453.282979ms: waiting for machine to come up
	I0115 02:58:30.650179   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:30.650633   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:30.650661   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:30.650597   23832 retry.go:31] will retry after 509.075269ms: waiting for machine to come up
	I0115 02:58:31.161065   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:31.161443   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:31.161472   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:31.161401   23832 retry.go:31] will retry after 471.62185ms: waiting for machine to come up
	I0115 02:58:31.634969   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:31.635370   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:31.635417   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:31.635320   23832 retry.go:31] will retry after 647.582826ms: waiting for machine to come up
	I0115 02:58:32.283989   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:32.284354   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:32.284383   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:32.284302   23832 retry.go:31] will retry after 993.298534ms: waiting for machine to come up
	I0115 02:58:33.278728   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:33.279095   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:33.279123   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:33.279051   23832 retry.go:31] will retry after 1.081585318s: waiting for machine to come up
	I0115 02:58:34.362107   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:34.362505   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:34.362535   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:34.362466   23832 retry.go:31] will retry after 1.251610896s: waiting for machine to come up
	I0115 02:58:35.615925   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:35.616437   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:35.616469   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:35.616376   23832 retry.go:31] will retry after 1.802852546s: waiting for machine to come up
	I0115 02:58:37.420309   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:37.420833   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:37.420865   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:37.420783   23832 retry.go:31] will retry after 2.055276332s: waiting for machine to come up
	I0115 02:58:39.477437   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:39.477858   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:39.477886   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:39.477799   23832 retry.go:31] will retry after 3.431189295s: waiting for machine to come up
	I0115 02:58:42.913263   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:42.913755   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:42.913804   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:42.913729   23832 retry.go:31] will retry after 4.071377514s: waiting for machine to come up
	I0115 02:58:46.988351   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:46.988687   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find current IP address of domain ha-680410 in network mk-ha-680410
	I0115 02:58:46.988707   23809 main.go:141] libmachine: (ha-680410) DBG | I0115 02:58:46.988650   23832 retry.go:31] will retry after 4.734714935s: waiting for machine to come up
	I0115 02:58:51.727284   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.727690   23809 main.go:141] libmachine: (ha-680410) Found IP for machine: 192.168.39.194
	I0115 02:58:51.727720   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has current primary IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.727733   23809 main.go:141] libmachine: (ha-680410) Reserving static IP address...
	I0115 02:58:51.728095   23809 main.go:141] libmachine: (ha-680410) DBG | unable to find host DHCP lease matching {name: "ha-680410", mac: "52:54:00:f3:e1:70", ip: "192.168.39.194"} in network mk-ha-680410
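
Between "Waiting to get IP" and "Found IP", the driver polls libvirt's DHCP leases with a growing, jittered delay (the retry.go:31 lines above). A minimal sketch of that backoff shape; lookupIP, the bounds, and the jitter factor are assumptions for illustration, not minikube's exact implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a placeholder for querying libvirt's DHCP leases by MAC.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add jitter, capping it so we keep polling.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if backoff < 5*time.Second {
    			backoff *= 2
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
    }

    func main() {
    	_, err := waitForIP("52:54:00:f3:e1:70", 3*time.Second)
    	fmt.Println(err)
    }
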
	I0115 02:58:51.795648   23809 main.go:141] libmachine: (ha-680410) DBG | Getting to WaitForSSH function...
	I0115 02:58:51.795685   23809 main.go:141] libmachine: (ha-680410) Reserved static IP address: 192.168.39.194
	I0115 02:58:51.795700   23809 main.go:141] libmachine: (ha-680410) Waiting for SSH to be available...
	I0115 02:58:51.797888   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.798223   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:51.798244   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.798415   23809 main.go:141] libmachine: (ha-680410) DBG | Using SSH client type: external
	I0115 02:58:51.798440   23809 main.go:141] libmachine: (ha-680410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa (-rw-------)
	I0115 02:58:51.798509   23809 main.go:141] libmachine: (ha-680410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 02:58:51.798539   23809 main.go:141] libmachine: (ha-680410) DBG | About to run SSH command:
	I0115 02:58:51.798557   23809 main.go:141] libmachine: (ha-680410) DBG | exit 0
	I0115 02:58:51.886688   23809 main.go:141] libmachine: (ha-680410) DBG | SSH cmd err, output: <nil>: 
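
WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs exit 0 until the command succeeds, which proves sshd is up and the injected key is accepted. A sketch of that probe under those assumptions; the option list is condensed from the DBG dump above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForSSH(addr, keyPath string, timeout time.Duration) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@" + addr,
    		"exit 0",
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Exit status 0 means the guest is reachable over SSH.
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s not available after %v", addr, timeout)
    }

    func main() { fmt.Println(waitForSSH("192.168.39.194", "id_rsa", 10*time.Second)) }
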
	I0115 02:58:51.886893   23809 main.go:141] libmachine: (ha-680410) KVM machine creation complete!
	I0115 02:58:51.887170   23809 main.go:141] libmachine: (ha-680410) Calling .GetConfigRaw
	I0115 02:58:51.887678   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:51.887860   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:51.888002   23809 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 02:58:51.888019   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:58:51.889224   23809 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 02:58:51.889244   23809 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 02:58:51.889277   23809 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 02:58:51.889294   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:51.891740   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.892129   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:51.892159   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:51.892267   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:51.892468   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:51.892624   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:51.892767   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:51.892944   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:51.893290   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:51.893304   23809 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 02:58:52.010186   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:58:52.010211   23809 main.go:141] libmachine: Detecting the provisioner...
	I0115 02:58:52.010231   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.012537   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.012875   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.012901   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.013018   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.013194   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.013340   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.013474   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.013599   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.013945   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.013959   23809 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 02:58:52.127568   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 02:58:52.127653   23809 main.go:141] libmachine: found compatible host: buildroot
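
Provisioner detection comes down to parsing the ID field out of the /etc/os-release output above and matching ID=buildroot. A small sketch of that parse; parseOSRelease is a hypothetical helper, not minikube's function:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSRelease extracts KEY=value pairs from /etc/os-release content.
    func parseOSRelease(content string) map[string]string {
    	kv := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(content))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if k, v, ok := strings.Cut(line, "="); ok {
    			kv[k] = strings.Trim(v, `"`)
    		}
    	}
    	return kv
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION_ID=2021.02.12\nID=buildroot\n"
    	info := parseOSRelease(out)
    	// minikube matches ID=buildroot to pick its buildroot provisioner.
    	fmt.Println("compatible host:", info["ID"])
    }
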
	I0115 02:58:52.127669   23809 main.go:141] libmachine: Provisioning with buildroot...
	I0115 02:58:52.127683   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:52.127940   23809 buildroot.go:166] provisioning hostname "ha-680410"
	I0115 02:58:52.127964   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:52.128136   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.130729   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.131034   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.131056   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.131207   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.131373   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.131531   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.131679   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.131805   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.132120   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.132134   23809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-680410 && echo "ha-680410" | sudo tee /etc/hostname
	I0115 02:58:52.258746   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-680410
	
	I0115 02:58:52.258786   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.261304   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.261689   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.261719   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.261859   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.262016   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.262172   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.262272   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.262456   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.262808   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.262828   23809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-680410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-680410/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-680410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 02:58:52.387103   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:58:52.387133   23809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 02:58:52.387167   23809 buildroot.go:174] setting up certificates
	I0115 02:58:52.387177   23809 provision.go:84] configureAuth start
	I0115 02:58:52.387186   23809 main.go:141] libmachine: (ha-680410) Calling .GetMachineName
	I0115 02:58:52.387439   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:52.389861   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.390181   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.390212   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.390338   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.392342   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.392634   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.392662   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.392793   23809 provision.go:143] copyHostCerts
	I0115 02:58:52.392835   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:58:52.392875   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem, removing ...
	I0115 02:58:52.392895   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:58:52.392983   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 02:58:52.393068   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:58:52.393090   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem, removing ...
	I0115 02:58:52.393099   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:58:52.393133   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 02:58:52.393185   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:58:52.393206   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem, removing ...
	I0115 02:58:52.393216   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:58:52.393255   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
	I0115 02:58:52.393368   23809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.ha-680410 san=[127.0.0.1 192.168.39.194 ha-680410 localhost minikube]
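
The server certificate is generated locally and signed by the minikube CA, with the machine IP, hostname, and the usual localhost names as SANs (the san=[...] list above). A condensed standard-library sketch of that x509 step; serial number, key size, and lifetime are illustrative, and the CA cert/key wiring is omitted:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-680410"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log: IPs and DNS names the server may be reached by.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.194")},
    		DNSNames:    []string{"ha-680410", "localhost", "minikube"},
    	}
    	// Sign the new leaf with the CA's key.
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() { log.Println("sketch only; wire up a real CA cert/key to use serverCert") }
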
	I0115 02:58:52.587892   23809 provision.go:177] copyRemoteCerts
	I0115 02:58:52.587948   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 02:58:52.587976   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.590227   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.590474   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.590522   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.590640   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.590820   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.590974   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.591112   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:52.675339   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 02:58:52.675407   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 02:58:52.697515   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 02:58:52.697571   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0115 02:58:52.719673   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 02:58:52.719717   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 02:58:52.741049   23809 provision.go:87] duration metric: took 353.863276ms to configureAuth
	I0115 02:58:52.741067   23809 buildroot.go:189] setting minikube options for container-runtime
	I0115 02:58:52.741254   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:58:52.741281   23809 main.go:141] libmachine: Checking connection to Docker...
	I0115 02:58:52.741293   23809 main.go:141] libmachine: (ha-680410) Calling .GetURL
	I0115 02:58:52.742266   23809 main.go:141] libmachine: (ha-680410) DBG | Using libvirt version 6000000
	I0115 02:58:52.744486   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.744830   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.744857   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.745018   23809 main.go:141] libmachine: Docker is up and running!
	I0115 02:58:52.745034   23809 main.go:141] libmachine: Reticulating splines...
	I0115 02:58:52.745042   23809 client.go:171] duration metric: took 24.758071499s to LocalClient.Create
	I0115 02:58:52.745068   23809 start.go:167] duration metric: took 24.758138882s to libmachine.API.Create "ha-680410"
	I0115 02:58:52.745091   23809 start.go:293] postStartSetup for "ha-680410" (driver="kvm2")
	I0115 02:58:52.745107   23809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 02:58:52.745128   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.745346   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 02:58:52.745382   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.747454   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.747763   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.747784   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.747911   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.748086   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.748206   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.748354   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:52.835320   23809 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 02:58:52.839327   23809 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 02:58:52.839349   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 02:58:52.839418   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 02:58:52.839517   23809 filesync.go:149] local asset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> 149542.pem in /etc/ssl/certs
	I0115 02:58:52.839529   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /etc/ssl/certs/149542.pem
	I0115 02:58:52.839648   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 02:58:52.847179   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:58:52.868606   23809 start.go:296] duration metric: took 123.502219ms for postStartSetup
	I0115 02:58:52.868645   23809 main.go:141] libmachine: (ha-680410) Calling .GetConfigRaw
	I0115 02:58:52.869146   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:52.871436   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.871764   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.871791   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.872024   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:58:52.872175   23809 start.go:128] duration metric: took 24.900747472s to createHost
	I0115 02:58:52.872194   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.874389   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.874677   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.874702   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.874834   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.874996   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.875128   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.875265   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.875430   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:58:52.875852   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0115 02:58:52.875868   23809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 02:58:52.991518   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705287532.963773010
	
	I0115 02:58:52.991538   23809 fix.go:216] guest clock: 1705287532.963773010
	I0115 02:58:52.991548   23809 fix.go:229] Guest: 2024-01-15 02:58:52.96377301 +0000 UTC Remote: 2024-01-15 02:58:52.872185068 +0000 UTC m=+25.015719209 (delta=91.587942ms)
	I0115 02:58:52.991575   23809 fix.go:200] guest clock delta is within tolerance: 91.587942ms
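
The fix.go lines read the guest clock via date +%s.%N over SSH and accept the machine when the host/guest delta is inside a tolerance. A sketch of the parse-and-compare; the 2s tolerance here is an assumption for illustration, not minikube's exact threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestTime parses "seconds.nanoseconds" as printed by `date +%s.%N`.
    func guestTime(out string) (time.Time, error) {
    	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
    	s, err := strconv.ParseInt(sec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	n, err := strconv.ParseInt(frac, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(s, n), nil
    }

    func main() {
    	guest, err := guestTime("1705287532.963773010")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// Accept small skew; a drift past tolerance would force a clock resync.
    	const tolerance = 2 * time.Second
    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
    }
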
	I0115 02:58:52.991582   23809 start.go:83] releasing machines lock for "ha-680410", held for 25.02024292s
	I0115 02:58:52.991603   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.991821   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:52.993928   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.994236   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.994264   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.994392   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.994803   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.994936   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:58:52.995046   23809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 02:58:52.995083   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.995145   23809 ssh_runner.go:195] Run: cat /version.json
	I0115 02:58:52.995169   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:58:52.997819   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.997846   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.998112   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.998141   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.998167   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:52.998191   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:52.998280   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.998384   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:58:52.998454   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.998506   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:58:52.998600   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.998659   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:58:52.998764   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:52.998798   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:58:53.106328   23809 ssh_runner.go:195] Run: systemctl --version
	I0115 02:58:53.111741   23809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 02:58:53.117367   23809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 02:58:53.117417   23809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 02:58:53.131855   23809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 02:58:53.131870   23809 start.go:494] detecting cgroup driver to use...
	I0115 02:58:53.131912   23809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 02:58:53.164602   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 02:58:53.176236   23809 docker.go:217] disabling cri-docker service (if available) ...
	I0115 02:58:53.176289   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 02:58:53.187346   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 02:58:53.198293   23809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 02:58:53.295889   23809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 02:58:53.413165   23809 docker.go:233] disabling docker service ...
	I0115 02:58:53.413227   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 02:58:53.426285   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 02:58:53.436501   23809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 02:58:53.545675   23809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 02:58:53.653772   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 02:58:53.665847   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 02:58:53.682550   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 02:58:53.691204   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 02:58:53.699943   23809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 02:58:53.699986   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 02:58:53.708535   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:58:53.717167   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 02:58:53.725624   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:58:53.734453   23809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 02:58:53.743425   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
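
The run above is a pipeline of in-place sed edits to /etc/containerd/config.toml: pin the pause image to registry.k8s.io/pause:3.9, set SystemdCgroup = false to match the cgroupfs driver, migrate runtime entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same edits expressed as a Go helper over the whole file; patchConfig is a sketch, not the ssh_runner implementation:

    package main

    import (
    	"os"
    	"regexp"
    )

    type patch struct{ re, repl string }

    func patchConfig(path string, patches []patch) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	for _, p := range patches {
    		// (?m) makes ^ and $ match per line, like sed's default addressing.
    		data = regexp.MustCompile("(?m)" + p.re).ReplaceAll(data, []byte(p.repl))
    	}
    	return os.WriteFile(path, data, 0644)
    }

    func main() {
    	_ = patchConfig("/etc/containerd/config.toml", []patch{
    		{`^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
    		{`^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
    		{`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`},
    		{`^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
    	})
    }
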
	I0115 02:58:53.752227   23809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 02:58:53.760003   23809 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 02:58:53.760053   23809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 02:58:53.771991   23809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 02:58:53.779814   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:58:53.884216   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:58:53.914477   23809 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 02:58:53.914539   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:58:53.918700   23809 retry.go:31] will retry after 951.496472ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0115 02:58:54.870838   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:58:54.876153   23809 start.go:562] Will wait 60s for crictl version
	I0115 02:58:54.876202   23809 ssh_runner.go:195] Run: which crictl
	I0115 02:58:54.879728   23809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 02:58:54.919213   23809 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 02:58:54.919276   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:58:54.947428   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:58:54.976417   23809 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 02:58:54.977575   23809 main.go:141] libmachine: (ha-680410) Calling .GetIP
	I0115 02:58:54.980102   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:54.980468   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:58:54.980493   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:58:54.980638   23809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 02:58:54.984434   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:58:54.996841   23809 kubeadm.go:877] updating cluster {Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} ...
	I0115 02:58:54.996930   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:58:54.996966   23809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:58:55.034588   23809 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 02:58:55.034639   23809 ssh_runner.go:195] Run: which lz4
	I0115 02:58:55.038287   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 02:58:55.038356   23809 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 02:58:55.042367   23809 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 02:58:55.042397   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0115 02:58:56.707744   23809 containerd.go:548] duration metric: took 1.669411813s to copy over tarball
	I0115 02:58:56.707808   23809 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 02:58:59.439268   23809 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.731440064s)
	I0115 02:58:59.439294   23809 containerd.go:555] duration metric: took 2.731530096s to extract the tarball
	I0115 02:58:59.439301   23809 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 02:58:59.478956   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:58:59.584118   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:58:59.611585   23809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:58:59.645661   23809 retry.go:31] will retry after 357.409654ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T02:58:59Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0115 02:59:00.003194   23809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 02:59:00.048479   23809 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 02:59:00.048500   23809 cache_images.go:84] Images are preloaded, skipping loading
	I0115 02:59:00.048508   23809 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.28.4 containerd true true} ...
	I0115 02:59:00.048650   23809 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-680410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
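
The drop-in above clears ExecStart and re-declares it with per-node flags (hostname override, node IP, kubeconfig paths). A sketch of assembling that line from node settings; the nodeCfg struct is illustrative, not minikube's type:

    package main

    import (
    	"fmt"
    	"strings"
    )

    type nodeCfg struct {
    	Version, Name, IP string
    }

    func kubeletExecStart(n nodeCfg) string {
    	flags := []string{
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    		"--config=/var/lib/kubelet/config.yaml",
    		"--hostname-override=" + n.Name,
    		"--kubeconfig=/etc/kubernetes/kubelet.conf",
    		"--node-ip=" + n.IP,
    	}
    	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
    		n.Version, strings.Join(flags, " "))
    }

    func main() {
    	fmt.Println(kubeletExecStart(nodeCfg{"v1.28.4", "ha-680410", "192.168.39.194"}))
    }
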
	I0115 02:59:00.048713   23809 ssh_runner.go:195] Run: sudo crictl info
	I0115 02:59:00.082962   23809 cni.go:84] Creating CNI manager for ""
	I0115 02:59:00.082987   23809 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0115 02:59:00.083000   23809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0115 02:59:00.083023   23809 kubeadm.go:180] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-680410 NodeName:ha-680410 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 02:59:00.083177   23809 kubeadm.go:186] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-680410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
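A note on the evictionHard values above: the generated kubelet config legitimately contains "0%", but when that YAML is routed through a printf-style formatter, the stray % is parsed as a verb with no operand, which is why raw logs can render it as "0%!"(MISSING)". A minimal Go sketch reproducing the artifact:

package main

import "fmt"

func main() {
	// The kubelet config template legitimately contains "0%".
	yaml := `nodefs.available: "0%"`
	// Misuse: passing data as the format string makes fmt treat %" as a
	// verb with no operand, printing: nodefs.available: "0%!"(MISSING)
	fmt.Printf(yaml + "\n")
	// Safe: print the string verbatim.
	fmt.Print(yaml + "\n")
}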
	I0115 02:59:00.083206   23809 kube-vip.go:101] generating kube-vip config ...
	I0115 02:59:00.083281   23809 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_ddns
	      value: "false"
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.6.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
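Per the manifest above, kube-vip runs as a static pod (it is written to /etc/kubernetes/manifests below), announces the floating address 192.168.39.254 over ARP (vip_arp=true), and gates ownership on the plndr-cp-lock Lease so only one control-plane node answers at a time. A minimal sketch, assuming network reachability to the VIP, that checks whether an API server is answering behind it (certificate verification is skipped because this is a liveness probe, not a trust decision):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.254:8443 is the VIP and port from the manifest above.
	dialer := &net.Dialer{Timeout: 3 * time.Second}
	conn, err := tls.DialWithDialer(dialer, "tcp", "192.168.39.254:8443",
		&tls.Config{InsecureSkipVerify: true}) // probe only; no trust decision
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	defer conn.Close()
	cn := conn.ConnectionState().PeerCertificates[0].Subject.CommonName
	fmt.Println("VIP is up; serving cert CN:", cn)
}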
	I0115 02:59:00.083339   23809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 02:59:00.092724   23809 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 02:59:00.092784   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0115 02:59:00.101930   23809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0115 02:59:00.117120   23809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 02:59:00.132785   23809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0115 02:59:00.148459   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1265 bytes)
	I0115 02:59:00.163517   23809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0115 02:59:00.167017   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
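The one-liner above makes the hosts entry idempotent: grep -v strips any existing control-plane.minikube.internal line, echo appends the VIP mapping, and cp atomically replaces /etc/hosts. The same shape in Go, as a sketch (hosts.sample is a stand-in path, not a file from this run):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so exactly one line maps
// the given hostname, mirroring the grep -v / echo / cp pipeline above.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop stale entries for this host (and empty lines).
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("hosts.sample", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}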
	I0115 02:59:00.177933   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:59:00.276490   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 02:59:00.292964   23809 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410 for IP: 192.168.39.194
	I0115 02:59:00.292980   23809 certs.go:194] generating shared ca certs ...
	I0115 02:59:00.293001   23809 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:00.293135   23809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 02:59:00.293181   23809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 02:59:00.293192   23809 certs.go:256] generating profile certs ...
	I0115 02:59:00.293249   23809 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key
	I0115 02:59:00.293261   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt with IP's: []
	I0115 02:59:00.989226   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt ...
	I0115 02:59:00.989258   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt: {Name:mkf0142a7c21ef12ae6ae6373ad6ebe719ca4b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:00.989437   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key ...
	I0115 02:59:00.989450   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key: {Name:mk6018755014a1632c637089ca5c3e252e5f2d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:00.989547   23809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8
	I0115 02:59:00.989563   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.254]
	I0115 02:59:01.147530   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8 ...
	I0115 02:59:01.147557   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8: {Name:mk1eac799ad83c47e55ca98d5f5e7de325eb259b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.147736   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8 ...
	I0115 02:59:01.147758   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8: {Name:mk7cca06559f993cf6cde82356f22c160f4172a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.147854   23809 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.5182c3b8 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt
	I0115 02:59:01.147942   23809 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.5182c3b8 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key
	I0115 02:59:01.148000   23809 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key
	I0115 02:59:01.148015   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt with IP's: []
	I0115 02:59:01.206142   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt ...
	I0115 02:59:01.206164   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt: {Name:mk6103e92ab2e2ce044b2163a740fbdd519b44b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.206312   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key ...
	I0115 02:59:01.206326   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key: {Name:mk9a523ac6d589c235e113f9b3edd6c22e1cdaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:01.206412   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 02:59:01.206429   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 02:59:01.206439   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 02:59:01.206452   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 02:59:01.206464   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 02:59:01.206477   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 02:59:01.206490   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 02:59:01.206503   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 02:59:01.206555   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem (1338 bytes)
	W0115 02:59:01.206586   23809 certs.go:480] ignoring /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954_empty.pem, impossibly tiny 0 bytes
	I0115 02:59:01.206595   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 02:59:01.206615   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 02:59:01.206636   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 02:59:01.206658   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 02:59:01.206695   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:59:01.206724   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.206738   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.206755   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem -> /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.207238   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 02:59:01.238872   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 02:59:01.266439   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 02:59:01.290886   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 02:59:01.321567   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0115 02:59:01.343248   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 02:59:01.365280   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 02:59:01.387437   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 02:59:01.409614   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /usr/share/ca-certificates/149542.pem (1708 bytes)
	I0115 02:59:01.431539   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 02:59:01.453400   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem --> /usr/share/ca-certificates/14954.pem (1338 bytes)
	I0115 02:59:01.475185   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 02:59:01.490668   23809 ssh_runner.go:195] Run: openssl version
	I0115 02:59:01.496688   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14954.pem && ln -fs /usr/share/ca-certificates/14954.pem /etc/ssl/certs/14954.pem"
	I0115 02:59:01.506153   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.510590   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:54 /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.510637   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14954.pem
	I0115 02:59:01.515925   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14954.pem /etc/ssl/certs/51391683.0"
	I0115 02:59:01.525100   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149542.pem && ln -fs /usr/share/ca-certificates/149542.pem /etc/ssl/certs/149542.pem"
	I0115 02:59:01.534256   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.538705   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 15 02:54 /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.538754   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149542.pem
	I0115 02:59:01.544176   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149542.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 02:59:01.553892   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 02:59:01.563382   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.567918   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.567968   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:01.573285   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
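Each openssl x509 -hash -noout call above prints the subject-name hash that OpenSSL uses for CA lookups, and the ln -fs commands create the matching <hash>.0 links (51391683.0, 3ec20f2e.0, b5213941.0) under /etc/ssl/certs so the certs become discoverable. A sketch of the same two steps, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // one of the certs above
	// Step 1: ask openssl for the subject-name hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	// Step 2: ln -fs equivalent -- replace any existing <hash>.0 link.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println(link, "->", cert)
}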
	I0115 02:59:01.582656   23809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 02:59:01.586737   23809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 02:59:01.586787   23809 kubeadm.go:391] StartCluster: {Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:59:01.586846   23809 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 02:59:01.586877   23809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 02:59:01.625249   23809 cri.go:89] found id: ""
	I0115 02:59:01.625314   23809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 02:59:01.633920   23809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 02:59:01.642245   23809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 02:59:01.650670   23809 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 02:59:01.650683   23809 kubeadm.go:156] found existing configuration files:
	
	I0115 02:59:01.650719   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0115 02:59:01.658390   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0115 02:59:01.658436   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0115 02:59:01.667471   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0115 02:59:01.674852   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0115 02:59:01.674893   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0115 02:59:01.683808   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0115 02:59:01.691248   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0115 02:59:01.691295   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0115 02:59:01.698852   23809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0115 02:59:01.705993   23809 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0115 02:59:01.706024   23809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0115 02:59:01.713540   23809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0115 02:59:01.824792   23809 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0115 02:59:01.824894   23809 kubeadm.go:309] [preflight] Running pre-flight checks
	I0115 02:59:01.971708   23809 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 02:59:01.971825   23809 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 02:59:01.971950   23809 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0115 02:59:02.196836   23809 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 02:59:02.198809   23809 out.go:204]   - Generating certificates and keys ...
	I0115 02:59:02.198911   23809 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0115 02:59:02.198999   23809 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0115 02:59:02.303288   23809 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 02:59:02.464084   23809 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0115 02:59:02.706555   23809 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0115 02:59:02.803711   23809 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0115 02:59:02.953146   23809 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0115 02:59:02.953437   23809 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-680410 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0115 02:59:03.162158   23809 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0115 02:59:03.162295   23809 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-680410 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0115 02:59:03.289721   23809 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 02:59:03.466079   23809 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 02:59:03.557828   23809 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0115 02:59:03.558088   23809 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 02:59:04.008340   23809 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 02:59:04.135617   23809 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 02:59:04.197203   23809 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 02:59:04.275573   23809 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 02:59:04.277129   23809 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 02:59:04.280751   23809 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 02:59:04.282694   23809 out.go:204]   - Booting up control plane ...
	I0115 02:59:04.282785   23809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 02:59:04.282887   23809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 02:59:04.282974   23809 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 02:59:04.297335   23809 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 02:59:04.298193   23809 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 02:59:04.298321   23809 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0115 02:59:04.410805   23809 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 02:59:13.988169   23809 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.582154 seconds
	I0115 02:59:13.988424   23809 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 02:59:14.008285   23809 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 02:59:14.539012   23809 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 02:59:14.539278   23809 kubeadm.go:309] [mark-control-plane] Marking the node ha-680410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 02:59:15.054941   23809 kubeadm.go:309] [bootstrap-token] Using token: uo86kr.pjq7c4l94qhdmxio
	I0115 02:59:15.056344   23809 out.go:204]   - Configuring RBAC rules ...
	I0115 02:59:15.056451   23809 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 02:59:15.062713   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 02:59:15.079045   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 02:59:15.081978   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 02:59:15.085715   23809 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 02:59:15.088528   23809 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 02:59:15.102910   23809 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 02:59:15.302861   23809 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0115 02:59:15.468650   23809 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0115 02:59:15.468675   23809 kubeadm.go:309] 
	I0115 02:59:15.468730   23809 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0115 02:59:15.468736   23809 kubeadm.go:309] 
	I0115 02:59:15.468829   23809 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0115 02:59:15.468865   23809 kubeadm.go:309] 
	I0115 02:59:15.468912   23809 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0115 02:59:15.468991   23809 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 02:59:15.469080   23809 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 02:59:15.469094   23809 kubeadm.go:309] 
	I0115 02:59:15.469160   23809 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0115 02:59:15.469174   23809 kubeadm.go:309] 
	I0115 02:59:15.469251   23809 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 02:59:15.469261   23809 kubeadm.go:309] 
	I0115 02:59:15.469328   23809 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0115 02:59:15.469433   23809 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 02:59:15.469517   23809 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 02:59:15.469534   23809 kubeadm.go:309] 
	I0115 02:59:15.469642   23809 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 02:59:15.469758   23809 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0115 02:59:15.469769   23809 kubeadm.go:309] 
	I0115 02:59:15.469893   23809 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uo86kr.pjq7c4l94qhdmxio \
	I0115 02:59:15.470052   23809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 \
	I0115 02:59:15.470082   23809 kubeadm.go:309] 	--control-plane 
	I0115 02:59:15.470091   23809 kubeadm.go:309] 
	I0115 02:59:15.470218   23809 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0115 02:59:15.470231   23809 kubeadm.go:309] 
	I0115 02:59:15.470330   23809 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uo86kr.pjq7c4l94qhdmxio \
	I0115 02:59:15.470489   23809 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 
	I0115 02:59:15.470887   23809 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
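The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo; joining nodes use it to pin the CA before trusting the API server. A sketch that recomputes it from a ca.crt file (the path here is an assumption; on the node the CA lives at /var/lib/minikube/certs/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}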
	I0115 02:59:15.470947   23809 cni.go:84] Creating CNI manager for ""
	I0115 02:59:15.470960   23809 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0115 02:59:15.472691   23809 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 02:59:15.473991   23809 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 02:59:15.479136   23809 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 02:59:15.479149   23809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 02:59:15.512184   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 02:59:16.514348   23809 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.002112589s)
	I0115 02:59:16.514399   23809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 02:59:16.514482   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:16.514512   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-680410 minikube.k8s.io/updated_at=2024_01_15T02_59_16_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=true
	I0115 02:59:16.590916   23809 ops.go:34] apiserver oom_adj: -16
	I0115 02:59:16.756085   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:17.256957   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:17.756509   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:18.256482   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:18.756165   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:19.256475   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:19.756574   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:20.256193   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:20.756230   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:21.256483   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:21.756473   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:22.256193   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:22.757079   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:23.256928   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:23.756584   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:24.256999   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:24.757170   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 02:59:24.854518   23809 kubeadm.go:1106] duration metric: took 8.340099407s to wait for elevateKubeSystemPrivileges
	W0115 02:59:24.854556   23809 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0115 02:59:24.854563   23809 kubeadm.go:393] duration metric: took 23.267779901s to StartCluster
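The burst of kubectl get sa default calls between 02:59:16 and 02:59:24 is a poll: minikube retries roughly every 500ms until the controller-manager has created the default ServiceAccount, and the duration metric above attributes that wait to elevateKubeSystemPrivileges. A generic sketch of the same retry shape, shelling out to kubectl (assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exits 0 only once the controller-manager has created the
		// "default" ServiceAccount in the default namespace.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	fmt.Println("timed out waiting for the default service account")
}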
	I0115 02:59:24.854584   23809 settings.go:142] acquiring lock: {Name:mk9dadd460779833544b9ee743c73944f5d142f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:24.854668   23809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:59:24.855287   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/kubeconfig: {Name:mkf5d0331212c9d6c1cc4e6eb80eacb35f40ffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:24.855525   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 02:59:24.855542   23809 start.go:232] HA cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:59:24.855565   23809 start.go:240] waiting for startup goroutines ...
	I0115 02:59:24.855572   23809 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 02:59:24.855624   23809 addons.go:69] Setting storage-provisioner=true in profile "ha-680410"
	I0115 02:59:24.855649   23809 addons.go:69] Setting default-storageclass=true in profile "ha-680410"
	I0115 02:59:24.855708   23809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-680410"
	I0115 02:59:24.855720   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:24.855654   23809 addons.go:234] Setting addon storage-provisioner=true in "ha-680410"
	I0115 02:59:24.855782   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 02:59:24.856101   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.856128   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.856201   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.856239   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.870155   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0115 02:59:24.870170   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0115 02:59:24.870557   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.870603   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.870987   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.871011   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.871102   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.871122   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.871343   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.871400   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.871514   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:24.871808   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.871836   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.873335   23809 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:59:24.873526   23809 kapi.go:59] client config for ha-680410: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key", CAFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 02:59:24.874066   23809 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 02:59:24.874167   23809 addons.go:234] Setting addon default-storageclass=true in "ha-680410"
	I0115 02:59:24.874201   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 02:59:24.874488   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.874514   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.886120   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0115 02:59:24.886552   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.886985   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.887004   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.887376   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.887547   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:24.887685   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0115 02:59:24.888027   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.888601   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.888617   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.888943   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.889096   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:59:24.891145   23809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 02:59:24.889384   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:24.892488   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:24.892565   23809 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 02:59:24.892582   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 02:59:24.892601   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:59:24.895268   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.895714   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:59:24.895746   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.895909   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:59:24.896056   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:59:24.896180   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:59:24.896328   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:59:24.907116   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0115 02:59:24.907455   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:24.907852   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:24.907870   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:24.908222   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:24.908394   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:24.909896   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:59:24.910119   23809 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 02:59:24.910137   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 02:59:24.910148   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:59:24.912454   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.912844   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:59:24.912871   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:24.913008   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:59:24.913147   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:59:24.913251   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:59:24.913374   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:59:25.028130   23809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 02:59:25.031629   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 02:59:25.061298   23809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 02:59:26.301601   23809 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.269937964s)
	I0115 02:59:26.301653   23809 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
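The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a log directive before errors and a hosts block (mapping host.minikube.internal to 192.168.39.1, with fallthrough) before the forward plugin. A sketch of the same transformation in Go, applied to a representative Corefile (the input here is illustrative, not the exact ConfigMap from this run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`
	hostsBlock := `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if trimmed == "errors" {
			out = append(out, "        log") // sed: insert log before errors
		}
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock) // sed: insert hosts block before forward
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}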
	I0115 02:59:26.301706   23809 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.240373348s)
	I0115 02:59:26.301755   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.301770   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.301790   23809 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273613504s)
	I0115 02:59:26.301824   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.301840   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.302099   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302125   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302134   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302152   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302171   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.302192   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.302251   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302266   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302280   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.302289   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.302384   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302404   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302421   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302577   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.302630   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.302586   23809 main.go:141] libmachine: (ha-680410) DBG | Closing plugin on server side
	I0115 02:59:26.302750   23809 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0115 02:59:26.302769   23809 round_trippers.go:469] Request Headers:
	I0115 02:59:26.302780   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 02:59:26.302792   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 02:59:26.317368   23809 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0115 02:59:26.318073   23809 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0115 02:59:26.318089   23809 round_trippers.go:469] Request Headers:
	I0115 02:59:26.318101   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 02:59:26.318114   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 02:59:26.318126   23809 round_trippers.go:473]     Content-Type: application/json
	I0115 02:59:26.320646   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 02:59:26.320786   23809 main.go:141] libmachine: Making call to close driver server
	I0115 02:59:26.320802   23809 main.go:141] libmachine: (ha-680410) Calling .Close
	I0115 02:59:26.321053   23809 main.go:141] libmachine: Successfully made call to close driver server
	I0115 02:59:26.321071   23809 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 02:59:26.322811   23809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 02:59:26.324145   23809 addons.go:505] duration metric: took 1.468567691s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0115 02:59:26.324191   23809 start.go:245] waiting for cluster config update ...
	I0115 02:59:26.324209   23809 start.go:254] writing updated cluster config ...
	I0115 02:59:26.325931   23809 out.go:177] 
	I0115 02:59:26.327432   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:26.327499   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:59:26.329325   23809 out.go:177] * Starting "ha-680410-m02" control-plane node in "ha-680410" cluster
	I0115 02:59:26.330656   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:59:26.330674   23809 cache.go:56] Caching tarball of preloaded images
	I0115 02:59:26.330746   23809 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 02:59:26.330756   23809 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 02:59:26.330813   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:59:26.330976   23809 start.go:360] acquireMachinesLock for ha-680410-m02: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 02:59:26.331031   23809 start.go:364] duration metric: took 33.141µs to acquireMachinesLock for "ha-680410-m02"
	I0115 02:59:26.331051   23809 start.go:93] Provisioning new machine with config: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 02:59:26.331140   23809 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0115 02:59:26.332874   23809 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 02:59:26.332942   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:26.332963   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:26.346540   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0115 02:59:26.346945   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:26.347530   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:26.347557   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:26.347844   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:26.348018   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:26.348161   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:26.348303   23809 start.go:159] libmachine.API.Create for "ha-680410" (driver="kvm2")
	I0115 02:59:26.348325   23809 client.go:168] LocalClient.Create starting
	I0115 02:59:26.348354   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 02:59:26.348383   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:59:26.348396   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:59:26.348443   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 02:59:26.348461   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 02:59:26.348472   23809 main.go:141] libmachine: Parsing certificate...
	I0115 02:59:26.348485   23809 main.go:141] libmachine: Running pre-create checks...
	I0115 02:59:26.348493   23809 main.go:141] libmachine: (ha-680410-m02) Calling .PreCreateCheck
	I0115 02:59:26.348642   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetConfigRaw
	I0115 02:59:26.349088   23809 main.go:141] libmachine: Creating machine...
	I0115 02:59:26.349110   23809 main.go:141] libmachine: (ha-680410-m02) Calling .Create
	I0115 02:59:26.349238   23809 main.go:141] libmachine: (ha-680410-m02) Creating KVM machine...
	I0115 02:59:26.350365   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found existing default KVM network
	I0115 02:59:26.350494   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found existing private KVM network mk-ha-680410
	I0115 02:59:26.350612   23809 main.go:141] libmachine: (ha-680410-m02) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02 ...
	I0115 02:59:26.350643   23809 main.go:141] libmachine: (ha-680410-m02) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 02:59:26.350696   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.350594   24145 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:59:26.350806   23809 main.go:141] libmachine: (ha-680410-m02) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 02:59:26.550923   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.550773   24145 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa...
	I0115 02:59:26.682150   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.682041   24145 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/ha-680410-m02.rawdisk...
	I0115 02:59:26.682180   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Writing magic tar header
	I0115 02:59:26.682191   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Writing SSH key tar header
	I0115 02:59:26.682200   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:26.682145   24145 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02 ...
	I0115 02:59:26.682281   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02
	I0115 02:59:26.682338   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 02:59:26.682352   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02 (perms=drwx------)
	I0115 02:59:26.682366   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 02:59:26.682382   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 02:59:26.682394   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:59:26.682414   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 02:59:26.682427   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 02:59:26.682436   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home/jenkins
	I0115 02:59:26.682450   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Checking permissions on dir: /home
	I0115 02:59:26.682463   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 02:59:26.682474   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Skipping /home - not owner
	I0115 02:59:26.682494   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 02:59:26.682508   23809 main.go:141] libmachine: (ha-680410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 02:59:26.682524   23809 main.go:141] libmachine: (ha-680410-m02) Creating domain...
	I0115 02:59:26.683307   23809 main.go:141] libmachine: (ha-680410-m02) define libvirt domain using xml: 
	I0115 02:59:26.683331   23809 main.go:141] libmachine: (ha-680410-m02) <domain type='kvm'>
	I0115 02:59:26.683342   23809 main.go:141] libmachine: (ha-680410-m02)   <name>ha-680410-m02</name>
	I0115 02:59:26.683356   23809 main.go:141] libmachine: (ha-680410-m02)   <memory unit='MiB'>2200</memory>
	I0115 02:59:26.683365   23809 main.go:141] libmachine: (ha-680410-m02)   <vcpu>2</vcpu>
	I0115 02:59:26.683376   23809 main.go:141] libmachine: (ha-680410-m02)   <features>
	I0115 02:59:26.683385   23809 main.go:141] libmachine: (ha-680410-m02)     <acpi/>
	I0115 02:59:26.683413   23809 main.go:141] libmachine: (ha-680410-m02)     <apic/>
	I0115 02:59:26.683423   23809 main.go:141] libmachine: (ha-680410-m02)     <pae/>
	I0115 02:59:26.683435   23809 main.go:141] libmachine: (ha-680410-m02)     
	I0115 02:59:26.683449   23809 main.go:141] libmachine: (ha-680410-m02)   </features>
	I0115 02:59:26.683461   23809 main.go:141] libmachine: (ha-680410-m02)   <cpu mode='host-passthrough'>
	I0115 02:59:26.683473   23809 main.go:141] libmachine: (ha-680410-m02)   
	I0115 02:59:26.683485   23809 main.go:141] libmachine: (ha-680410-m02)   </cpu>
	I0115 02:59:26.683513   23809 main.go:141] libmachine: (ha-680410-m02)   <os>
	I0115 02:59:26.683535   23809 main.go:141] libmachine: (ha-680410-m02)     <type>hvm</type>
	I0115 02:59:26.683547   23809 main.go:141] libmachine: (ha-680410-m02)     <boot dev='cdrom'/>
	I0115 02:59:26.683560   23809 main.go:141] libmachine: (ha-680410-m02)     <boot dev='hd'/>
	I0115 02:59:26.683572   23809 main.go:141] libmachine: (ha-680410-m02)     <bootmenu enable='no'/>
	I0115 02:59:26.683586   23809 main.go:141] libmachine: (ha-680410-m02)   </os>
	I0115 02:59:26.683599   23809 main.go:141] libmachine: (ha-680410-m02)   <devices>
	I0115 02:59:26.683612   23809 main.go:141] libmachine: (ha-680410-m02)     <disk type='file' device='cdrom'>
	I0115 02:59:26.683631   23809 main.go:141] libmachine: (ha-680410-m02)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/boot2docker.iso'/>
	I0115 02:59:26.683643   23809 main.go:141] libmachine: (ha-680410-m02)       <target dev='hdc' bus='scsi'/>
	I0115 02:59:26.683658   23809 main.go:141] libmachine: (ha-680410-m02)       <readonly/>
	I0115 02:59:26.683667   23809 main.go:141] libmachine: (ha-680410-m02)     </disk>
	I0115 02:59:26.683674   23809 main.go:141] libmachine: (ha-680410-m02)     <disk type='file' device='disk'>
	I0115 02:59:26.683684   23809 main.go:141] libmachine: (ha-680410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 02:59:26.683692   23809 main.go:141] libmachine: (ha-680410-m02)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/ha-680410-m02.rawdisk'/>
	I0115 02:59:26.683700   23809 main.go:141] libmachine: (ha-680410-m02)       <target dev='hda' bus='virtio'/>
	I0115 02:59:26.683707   23809 main.go:141] libmachine: (ha-680410-m02)     </disk>
	I0115 02:59:26.683718   23809 main.go:141] libmachine: (ha-680410-m02)     <interface type='network'>
	I0115 02:59:26.683737   23809 main.go:141] libmachine: (ha-680410-m02)       <source network='mk-ha-680410'/>
	I0115 02:59:26.683754   23809 main.go:141] libmachine: (ha-680410-m02)       <model type='virtio'/>
	I0115 02:59:26.683767   23809 main.go:141] libmachine: (ha-680410-m02)     </interface>
	I0115 02:59:26.683779   23809 main.go:141] libmachine: (ha-680410-m02)     <interface type='network'>
	I0115 02:59:26.683790   23809 main.go:141] libmachine: (ha-680410-m02)       <source network='default'/>
	I0115 02:59:26.683801   23809 main.go:141] libmachine: (ha-680410-m02)       <model type='virtio'/>
	I0115 02:59:26.683815   23809 main.go:141] libmachine: (ha-680410-m02)     </interface>
	I0115 02:59:26.683831   23809 main.go:141] libmachine: (ha-680410-m02)     <serial type='pty'>
	I0115 02:59:26.683848   23809 main.go:141] libmachine: (ha-680410-m02)       <target port='0'/>
	I0115 02:59:26.683860   23809 main.go:141] libmachine: (ha-680410-m02)     </serial>
	I0115 02:59:26.683873   23809 main.go:141] libmachine: (ha-680410-m02)     <console type='pty'>
	I0115 02:59:26.683881   23809 main.go:141] libmachine: (ha-680410-m02)       <target type='serial' port='0'/>
	I0115 02:59:26.683891   23809 main.go:141] libmachine: (ha-680410-m02)     </console>
	I0115 02:59:26.683908   23809 main.go:141] libmachine: (ha-680410-m02)     <rng model='virtio'>
	I0115 02:59:26.683924   23809 main.go:141] libmachine: (ha-680410-m02)       <backend model='random'>/dev/random</backend>
	I0115 02:59:26.683935   23809 main.go:141] libmachine: (ha-680410-m02)     </rng>
	I0115 02:59:26.683945   23809 main.go:141] libmachine: (ha-680410-m02)     
	I0115 02:59:26.683956   23809 main.go:141] libmachine: (ha-680410-m02)     
	I0115 02:59:26.683965   23809 main.go:141] libmachine: (ha-680410-m02)   </devices>
	I0115 02:59:26.683979   23809 main.go:141] libmachine: (ha-680410-m02) </domain>
	I0115 02:59:26.683995   23809 main.go:141] libmachine: (ha-680410-m02) 
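
Note: the XML echoed line-by-line above is the complete libvirt domain the kvm2 driver defines for the m02 node. As a minimal illustrative sketch (not minikube's actual driver code, which wraps libvirt differently), defining and starting such a domain with the libvirt.org/go/libvirt bindings looks like this; it assumes the XML has been saved to ha-680410-m02.xml and requires the libvirt C library for cgo:

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// Same system URI the config above shows (KVMQemuURI:qemu:///system).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile("ha-680410-m02.xml") // the <domain> dumped above
    	if err != nil {
    		panic(err)
    	}

    	// Define the persistent domain, then start it ("Creating domain...").
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // equivalent to `virsh start`
    		panic(err)
    	}
    	fmt.Println("domain started")
    }
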
	I0115 02:59:26.690205   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:19:e5:0c in network default
	I0115 02:59:26.690783   23809 main.go:141] libmachine: (ha-680410-m02) Ensuring networks are active...
	I0115 02:59:26.690802   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:26.691366   23809 main.go:141] libmachine: (ha-680410-m02) Ensuring network default is active
	I0115 02:59:26.691764   23809 main.go:141] libmachine: (ha-680410-m02) Ensuring network mk-ha-680410 is active
	I0115 02:59:26.692145   23809 main.go:141] libmachine: (ha-680410-m02) Getting domain xml...
	I0115 02:59:26.692917   23809 main.go:141] libmachine: (ha-680410-m02) Creating domain...
	I0115 02:59:27.858093   23809 main.go:141] libmachine: (ha-680410-m02) Waiting to get IP...
	I0115 02:59:27.858797   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:27.859193   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:27.859250   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:27.859187   24145 retry.go:31] will retry after 260.706878ms: waiting for machine to come up
	I0115 02:59:28.121668   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:28.122089   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:28.122115   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:28.122065   24145 retry.go:31] will retry after 387.419657ms: waiting for machine to come up
	I0115 02:59:28.510532   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:28.510996   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:28.511019   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:28.510934   24145 retry.go:31] will retry after 468.864898ms: waiting for machine to come up
	I0115 02:59:28.981613   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:28.982034   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:28.982058   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:28.981984   24145 retry.go:31] will retry after 575.195399ms: waiting for machine to come up
	I0115 02:59:29.558383   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:29.558883   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:29.558917   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:29.558823   24145 retry.go:31] will retry after 729.236253ms: waiting for machine to come up
	I0115 02:59:30.289099   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:30.289481   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:30.289511   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:30.289428   24145 retry.go:31] will retry after 829.478965ms: waiting for machine to come up
	I0115 02:59:31.121576   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:31.122084   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:31.122114   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:31.122045   24145 retry.go:31] will retry after 1.035714115s: waiting for machine to come up
	I0115 02:59:32.159626   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:32.160096   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:32.160119   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:32.160045   24145 retry.go:31] will retry after 1.19378826s: waiting for machine to come up
	I0115 02:59:33.355434   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:33.355910   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:33.355941   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:33.355853   24145 retry.go:31] will retry after 1.766332935s: waiting for machine to come up
	I0115 02:59:35.124834   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:35.125308   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:35.125347   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:35.125237   24145 retry.go:31] will retry after 2.009274852s: waiting for machine to come up
	I0115 02:59:37.135745   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:37.136228   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:37.136264   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:37.136154   24145 retry.go:31] will retry after 2.052928537s: waiting for machine to come up
	I0115 02:59:39.190454   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:39.191026   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:39.191057   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:39.190965   24145 retry.go:31] will retry after 3.049894642s: waiting for machine to come up
	I0115 02:59:42.242396   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:42.242889   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:42.242918   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:42.242836   24145 retry.go:31] will retry after 3.604090845s: waiting for machine to come up
	I0115 02:59:45.848336   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:45.848726   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find current IP address of domain ha-680410-m02 in network mk-ha-680410
	I0115 02:59:45.848749   23809 main.go:141] libmachine: (ha-680410-m02) DBG | I0115 02:59:45.848689   24145 retry.go:31] will retry after 3.507386872s: waiting for machine to come up
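
Note: the "will retry after ..." lines above are the driver polling the libvirt DHCP leases with a growing, jittered delay until the guest obtains an address. A minimal Go sketch of that wait-for-IP loop follows; lookupIP is a hypothetical placeholder, and the backoff constants are illustrative rather than minikube's retry.go values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the network's DHCP leases by MAC;
    // it is a hypothetical helper for illustration only.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Jittered, roughly exponential backoff, like the cadence above.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:46:bb:0b", 2*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
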
	I0115 02:59:49.359121   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.359498   23809 main.go:141] libmachine: (ha-680410-m02) Found IP for machine: 192.168.39.178
	I0115 02:59:49.359525   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has current primary IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.359536   23809 main.go:141] libmachine: (ha-680410-m02) Reserving static IP address...
	I0115 02:59:49.359840   23809 main.go:141] libmachine: (ha-680410-m02) DBG | unable to find host DHCP lease matching {name: "ha-680410-m02", mac: "52:54:00:46:bb:0b", ip: "192.168.39.178"} in network mk-ha-680410
	I0115 02:59:49.428463   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Getting to WaitForSSH function...
	I0115 02:59:49.428496   23809 main.go:141] libmachine: (ha-680410-m02) Reserved static IP address: 192.168.39.178
	I0115 02:59:49.428512   23809 main.go:141] libmachine: (ha-680410-m02) Waiting for SSH to be available...
	I0115 02:59:49.430912   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.431308   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.431333   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.431520   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Using SSH client type: external
	I0115 02:59:49.431541   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa (-rw-------)
	I0115 02:59:49.431562   23809 main.go:141] libmachine: (ha-680410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 02:59:49.431572   23809 main.go:141] libmachine: (ha-680410-m02) DBG | About to run SSH command:
	I0115 02:59:49.431589   23809 main.go:141] libmachine: (ha-680410-m02) DBG | exit 0
	I0115 02:59:49.518740   23809 main.go:141] libmachine: (ha-680410-m02) DBG | SSH cmd err, output: <nil>: 
	I0115 02:59:49.518934   23809 main.go:141] libmachine: (ha-680410-m02) KVM machine creation complete!
	I0115 02:59:49.519240   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetConfigRaw
	I0115 02:59:49.519751   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:49.519975   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:49.520135   23809 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 02:59:49.520152   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 02:59:49.521402   23809 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 02:59:49.521420   23809 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 02:59:49.521429   23809 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 02:59:49.521439   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.523689   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.524022   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.524052   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.524146   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.524356   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.524522   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.524668   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.524813   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.525198   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.525213   23809 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 02:59:49.630394   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
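
Note: the readiness probe above is simply running `exit 0` over SSH until it succeeds. A stripped-down version of that probe using golang.org/x/crypto/ssh is sketched below; the address, user, and key path are taken from the log, but this is an illustration, not minikube's sshutil implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") // the same no-op command the log runs
    }

    func main() {
    	err := sshReady("192.168.39.178:22", "docker",
    		"/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa")
    	fmt.Println("ssh ready:", err == nil)
    }
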
	I0115 02:59:49.630413   23809 main.go:141] libmachine: Detecting the provisioner...
	I0115 02:59:49.630423   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.632873   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.633244   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.633266   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.633412   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.633585   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.633716   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.633820   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.633948   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.634260   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.634275   23809 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 02:59:49.739957   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 02:59:49.740036   23809 main.go:141] libmachine: found compatible host: buildroot
	I0115 02:59:49.740050   23809 main.go:141] libmachine: Provisioning with buildroot...
	I0115 02:59:49.740063   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:49.740332   23809 buildroot.go:166] provisioning hostname "ha-680410-m02"
	I0115 02:59:49.740358   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:49.740506   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.742938   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.743208   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.743235   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.743374   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.743548   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.743702   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.743853   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.744018   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.744376   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.744390   23809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-680410-m02 && echo "ha-680410-m02" | sudo tee /etc/hostname
	I0115 02:59:49.864408   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-680410-m02
	
	I0115 02:59:49.864442   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.867017   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.867360   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.867381   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.867568   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:49.867763   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.867943   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:49.868070   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:49.868308   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:49.868646   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:49.868665   23809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-680410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-680410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-680410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 02:59:49.983674   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 02:59:49.983707   23809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 02:59:49.983726   23809 buildroot.go:174] setting up certificates
	I0115 02:59:49.983736   23809 provision.go:84] configureAuth start
	I0115 02:59:49.983747   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetMachineName
	I0115 02:59:49.984039   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:49.986567   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.986969   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.987007   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.987138   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:49.989073   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.989388   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:49.989424   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:49.989528   23809 provision.go:143] copyHostCerts
	I0115 02:59:49.989549   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:59:49.989574   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem, removing ...
	I0115 02:59:49.989582   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 02:59:49.989654   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 02:59:49.989720   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:59:49.989740   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem, removing ...
	I0115 02:59:49.989747   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 02:59:49.989769   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 02:59:49.989809   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:59:49.989824   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem, removing ...
	I0115 02:59:49.989830   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 02:59:49.989850   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
	I0115 02:59:49.989894   23809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.ha-680410-m02 san=[127.0.0.1 192.168.39.178 ha-680410-m02 localhost minikube]
	I0115 02:59:50.294184   23809 provision.go:177] copyRemoteCerts
	I0115 02:59:50.294238   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 02:59:50.294259   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.296954   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.297289   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.297323   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.297435   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.297638   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.297806   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.297994   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	I0115 02:59:50.380228   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 02:59:50.380285   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 02:59:50.402309   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 02:59:50.402372   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0115 02:59:50.423065   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 02:59:50.423112   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 02:59:50.444611   23809 provision.go:87] duration metric: took 460.864546ms to configureAuth
	I0115 02:59:50.444630   23809 buildroot.go:189] setting minikube options for container-runtime
	I0115 02:59:50.444787   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:50.444805   23809 main.go:141] libmachine: Checking connection to Docker...
	I0115 02:59:50.444814   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetURL
	I0115 02:59:50.445924   23809 main.go:141] libmachine: (ha-680410-m02) DBG | Using libvirt version 6000000
	I0115 02:59:50.447919   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.448188   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.448223   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.448328   23809 main.go:141] libmachine: Docker is up and running!
	I0115 02:59:50.448341   23809 main.go:141] libmachine: Reticulating splines...
	I0115 02:59:50.448347   23809 client.go:171] duration metric: took 24.100015468s to LocalClient.Create
	I0115 02:59:50.448366   23809 start.go:167] duration metric: took 24.100066383s to libmachine.API.Create "ha-680410"
	I0115 02:59:50.448375   23809 start.go:293] postStartSetup for "ha-680410-m02" (driver="kvm2")
	I0115 02:59:50.448386   23809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 02:59:50.448402   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.448612   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 02:59:50.448631   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.450564   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.450922   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.450950   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.451048   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.451195   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.451339   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.451457   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	I0115 02:59:50.536460   23809 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 02:59:50.540499   23809 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 02:59:50.540519   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 02:59:50.540584   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 02:59:50.540674   23809 filesync.go:149] local asset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> 149542.pem in /etc/ssl/certs
	I0115 02:59:50.540687   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /etc/ssl/certs/149542.pem
	I0115 02:59:50.540816   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 02:59:50.548879   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:59:50.571160   23809 start.go:296] duration metric: took 122.771281ms for postStartSetup
	I0115 02:59:50.571207   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetConfigRaw
	I0115 02:59:50.571783   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:50.574313   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.574631   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.574657   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.574866   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 02:59:50.575060   23809 start.go:128] duration metric: took 24.243905256s to createHost
	I0115 02:59:50.575084   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.577092   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.577461   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.577503   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.577666   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.577849   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.578008   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.578148   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.578309   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 02:59:50.578699   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0115 02:59:50.578713   23809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 02:59:50.688205   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705287590.667352173
	
	I0115 02:59:50.688222   23809 fix.go:216] guest clock: 1705287590.667352173
	I0115 02:59:50.688229   23809 fix.go:229] Guest: 2024-01-15 02:59:50.667352173 +0000 UTC Remote: 2024-01-15 02:59:50.575073246 +0000 UTC m=+82.718607387 (delta=92.278927ms)
	I0115 02:59:50.688242   23809 fix.go:200] guest clock delta is within tolerance: 92.278927ms
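
Note: the clock check above reads the guest's seconds.nanoseconds timestamp over SSH and compares it to the host-side wall clock, accepting the skew if it is within a tolerance; here the delta is 92.278927ms. The arithmetic, reproduced in Go with the exact values from the log (the 2s tolerance is an illustrative threshold, not necessarily minikube's):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values from the log above.
    	guest := time.Unix(1705287590, 667352173) // guest clock: 1705287590.667352173
    	remote := time.Date(2024, 1, 15, 2, 59, 50, 575073246, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second
    	fmt.Printf("delta=%s within tolerance=%v\n", delta, delta <= tolerance)
    	// Prints: delta=92.278927ms within tolerance=true
    }
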
	I0115 02:59:50.688247   23809 start.go:83] releasing machines lock for "ha-680410-m02", held for 24.357207925s
	I0115 02:59:50.688267   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.688536   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:50.691195   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.691525   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.691562   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.694213   23809 out.go:177] * Found network options:
	I0115 02:59:50.695754   23809 out.go:177]   - NO_PROXY=192.168.39.194
	W0115 02:59:50.697133   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 02:59:50.697168   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.697634   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.697795   23809 main.go:141] libmachine: (ha-680410-m02) Calling .DriverName
	I0115 02:59:50.697874   23809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 02:59:50.697912   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	W0115 02:59:50.697997   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 02:59:50.698070   23809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 02:59:50.698094   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHHostname
	I0115 02:59:50.700663   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.700683   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.701014   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.701044   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.701075   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:50.701096   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:50.701247   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.701354   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHPort
	I0115 02:59:50.701431   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.701518   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHKeyPath
	I0115 02:59:50.701536   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.701625   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetSSHUsername
	I0115 02:59:50.701678   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	I0115 02:59:50.701734   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m02/id_rsa Username:docker}
	W0115 02:59:50.801935   23809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 02:59:50.802004   23809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 02:59:50.818063   23809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 02:59:50.818081   23809 start.go:494] detecting cgroup driver to use...
	I0115 02:59:50.818136   23809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 02:59:50.850336   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 02:59:50.862144   23809 docker.go:217] disabling cri-docker service (if available) ...
	I0115 02:59:50.862188   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 02:59:50.877268   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 02:59:50.890432   23809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 02:59:51.000043   23809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 02:59:51.108329   23809 docker.go:233] disabling docker service ...
	I0115 02:59:51.108385   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 02:59:51.121033   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 02:59:51.132201   23809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 02:59:51.229539   23809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 02:59:51.323601   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 02:59:51.335123   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 02:59:51.351425   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 02:59:51.360848   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 02:59:51.369586   23809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 02:59:51.369624   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 02:59:51.378610   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:59:51.387425   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 02:59:51.396588   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 02:59:51.405103   23809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 02:59:51.414220   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
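
Note: the run of sed commands above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj and SystemdCgroup are forced to false (i.e. the cgroupfs driver), legacy runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. One of those edits, expressed with Go's regexp package instead of sed, purely as an illustration of what the in-place substitution does:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`

    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
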
	I0115 02:59:51.423286   23809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 02:59:51.431336   23809 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 02:59:51.431376   23809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 02:59:51.443990   23809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
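
Note: on this Buildroot guest the bridge-nf-call-iptables key does not exist until br_netfilter is loaded, hence the status-255 sysctl probe followed by modprobe. The final step is just a one-byte write into procfs; for example (requires root, sketch only):

    package main

    import "os"

    func main() {
    	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		panic(err)
    	}
    }
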
	I0115 02:59:51.452319   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 02:59:51.556825   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 02:59:51.587255   23809 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 02:59:51.587318   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:59:51.592708   23809 retry.go:31] will retry after 1.426358479s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0115 02:59:53.019728   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 02:59:53.025273   23809 start.go:562] Will wait 60s for crictl version
	I0115 02:59:53.025325   23809 ssh_runner.go:195] Run: which crictl
	I0115 02:59:53.029537   23809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 02:59:53.069082   23809 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 02:59:53.069145   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:59:53.095070   23809 ssh_runner.go:195] Run: containerd --version
	I0115 02:59:53.125400   23809 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 02:59:53.127079   23809 out.go:177]   - env NO_PROXY=192.168.39.194
	I0115 02:59:53.128589   23809 main.go:141] libmachine: (ha-680410-m02) Calling .GetIP
	I0115 02:59:53.131271   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:53.131652   23809 main.go:141] libmachine: (ha-680410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:bb:0b", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:59:42 +0000 UTC Type:0 Mac:52:54:00:46:bb:0b Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-680410-m02 Clientid:01:52:54:00:46:bb:0b}
	I0115 02:59:53.131673   23809 main.go:141] libmachine: (ha-680410-m02) DBG | domain ha-680410-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:46:bb:0b in network mk-ha-680410
	I0115 02:59:53.131848   23809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 02:59:53.135782   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 02:59:53.148234   23809 mustload.go:65] Loading cluster: ha-680410
	I0115 02:59:53.148428   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:59:53.148780   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:53.148813   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:53.163169   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I0115 02:59:53.163534   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:53.163942   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:53.163964   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:53.164227   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:53.164401   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 02:59:53.165595   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 02:59:53.165847   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:59:53.165867   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:59:53.178905   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0115 02:59:53.179250   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:59:53.179649   23809 main.go:141] libmachine: Using API Version  1
	I0115 02:59:53.179673   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:59:53.179942   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:59:53.180118   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 02:59:53.180274   23809 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410 for IP: 192.168.39.178
	I0115 02:59:53.180286   23809 certs.go:194] generating shared ca certs ...
	I0115 02:59:53.180303   23809 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:53.180433   23809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 02:59:53.180491   23809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 02:59:53.180504   23809 certs.go:256] generating profile certs ...
	I0115 02:59:53.180600   23809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key
	I0115 02:59:53.180631   23809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0
	I0115 02:59:53.180651   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.178 192.168.39.254]
	I0115 02:59:53.328651   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0 ...
	I0115 02:59:53.328673   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0: {Name:mk17a24c2a124432866ca036d582c795468142b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:53.328814   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0 ...
	I0115 02:59:53.328826   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0: {Name:mk6269895e33577cb314f33bcc0b0cb879fcbb31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:59:53.328891   23809 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.782249d0 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt
	I0115 02:59:53.328993   23809 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.782249d0 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key
	I0115 02:59:53.329105   23809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key
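The apiserver cert minted above must name every address a client may dial: the service IPs, localhost, both node IPs, and the HA VIP 192.168.39.254. A sketch of inspecting the SANs once the cert lands on the node (path as staged later in this log):

	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
	# expect the six IPs listed in the crypto.go line above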
	I0115 02:59:53.329119   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 02:59:53.329130   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 02:59:53.329140   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 02:59:53.329150   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 02:59:53.329160   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 02:59:53.329170   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 02:59:53.329180   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 02:59:53.329189   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 02:59:53.329231   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem (1338 bytes)
	W0115 02:59:53.329261   23809 certs.go:480] ignoring /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954_empty.pem, impossibly tiny 0 bytes
	I0115 02:59:53.329270   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 02:59:53.329289   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 02:59:53.329310   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 02:59:53.329330   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 02:59:53.329368   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem (1708 bytes)
	I0115 02:59:53.329394   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.329407   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem -> /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.329419   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.329447   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 02:59:53.331989   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:53.332401   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 02:59:53.332422   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 02:59:53.332571   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 02:59:53.332744   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 02:59:53.332873   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 02:59:53.333000   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 02:59:53.411697   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0115 02:59:53.415973   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0115 02:59:53.426847   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0115 02:59:53.430520   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0115 02:59:53.440838   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0115 02:59:53.444799   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0115 02:59:53.454889   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0115 02:59:53.458873   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0115 02:59:53.472436   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0115 02:59:53.476972   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0115 02:59:53.487216   23809 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0115 02:59:53.491206   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0115 02:59:53.501401   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 02:59:53.525062   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 02:59:53.546761   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 02:59:53.568620   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 02:59:53.589940   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0115 02:59:53.610824   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 02:59:53.631657   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 02:59:53.652570   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 02:59:53.676934   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 02:59:53.700039   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem --> /usr/share/ca-certificates/14954.pem (1338 bytes)
	I0115 02:59:53.722902   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /usr/share/ca-certificates/149542.pem (1708 bytes)
	I0115 02:59:53.746152   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0115 02:59:53.763154   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0115 02:59:53.779772   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0115 02:59:53.795069   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0115 02:59:53.809685   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0115 02:59:53.826297   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0115 02:59:53.842644   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0115 02:59:53.858787   23809 ssh_runner.go:195] Run: openssl version
	I0115 02:59:53.864120   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14954.pem && ln -fs /usr/share/ca-certificates/14954.pem /etc/ssl/certs/14954.pem"
	I0115 02:59:53.875463   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.879871   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:54 /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.879914   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14954.pem
	I0115 02:59:53.885350   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14954.pem /etc/ssl/certs/51391683.0"
	I0115 02:59:53.896959   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149542.pem && ln -fs /usr/share/ca-certificates/149542.pem /etc/ssl/certs/149542.pem"
	I0115 02:59:53.908090   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.912514   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 15 02:54 /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.912553   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149542.pem
	I0115 02:59:53.918007   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149542.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 02:59:53.929255   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 02:59:53.940631   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.945253   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.945291   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 02:59:53.951649   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
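The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes; the symlink is what lets anything using -CApath resolve the CA. A sketch of deriving one by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem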
	I0115 02:59:53.962958   23809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 02:59:53.967057   23809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 02:59:53.967104   23809 kubeadm.go:928] updating node {m02 192.168.39.178 8443 v1.28.4 containerd true true} ...
	I0115 02:59:53.967195   23809 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-680410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0115 02:59:53.967224   23809 kube-vip.go:101] generating kube-vip config ...
	I0115 02:59:53.967257   23809 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_ddns
	      value: "false"
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.6.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
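This static pod runs kube-vip in ARP mode with leader election on the plndr-cp-lock lease; whichever control plane holds the lease answers for 192.168.39.254 on eth0. A sketch for checking the election and the VIP, assuming kubectl access and SSH to a control-plane node:

	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
	ip addr show eth0 | grep 192.168.39.254    # present only on the current leader
	curl -sk https://192.168.39.254:8443/version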
	I0115 02:59:53.967296   23809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 02:59:53.977131   23809 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0115 02:59:53.977172   23809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0115 02:59:53.986826   23809 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0115 02:59:53.986844   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0115 02:59:53.986858   23809 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0115 02:59:53.986863   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 02:59:53.987026   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 02:59:53.993880   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0115 02:59:53.993903   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0115 03:00:25.831535   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:00:25.831622   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:00:25.836545   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0115 03:00:25.836588   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0115 03:01:03.555508   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:01:03.571109   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:01:03.571218   23809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:01:03.575682   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0115 03:01:03.575715   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
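Each binary is fetched against its published .sha256 and then streamed to the node; a sketch of the same verified download for one of them, using the dl.k8s.io URLs from the log:

	V=v1.28.4
	curl -fLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet"
	echo "$(curl -fsSL https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
	sudo install -m 0755 kubelet "/var/lib/minikube/binaries/${V}/kubelet"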
	I0115 03:01:04.047156   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0115 03:01:04.055233   23809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0115 03:01:04.070862   23809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 03:01:04.086351   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1265 bytes)
	I0115 03:01:04.102692   23809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0115 03:01:04.106268   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
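The /etc/hosts one-liner above is an upsert: drop any stale control-plane.minikube.internal line, append the fresh mapping, and replace the file with a single cp so readers never observe a partial write. The same idiom spelled out (variable names are illustrative):

	ip=192.168.39.254; name=control-plane.minikube.internal
	{ grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"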
	I0115 03:01:04.118371   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:01:04.221482   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:01:04.238608   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:01:04.238958   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:01:04.238996   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:01:04.253102   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0115 03:01:04.253478   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:01:04.253904   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:01:04.253924   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:01:04.254250   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:01:04.254446   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:01:04.254593   23809 start.go:316] joinCluster: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 03:01:04.254678   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 03:01:04.254692   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:01:04.257331   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:01:04.257723   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:01:04.257757   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:01:04.257855   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:01:04.258004   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:01:04.258158   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:01:04.258314   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:01:04.445679   23809 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:01:04.445718   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t81wp4.fpof4owqtmts2vhf --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m02 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443"
	I0115 03:01:41.573741   23809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t81wp4.fpof4owqtmts2vhf --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m02 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443": (37.127963039s)
	I0115 03:01:41.573771   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 03:01:42.042507   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-680410-m02 minikube.k8s.io/updated_at=2024_01_15T03_01_42_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=false
	I0115 03:01:42.153170   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-680410-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0115 03:01:42.298215   23809 start.go:318] duration metric: took 38.043615498s to joinCluster
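Condensed, the 38s join above is the standard two-step kubeadm flow: mint a join command on the primary, then run it with control-plane flags on the new node (placeholders below stand in for the real token and CA hash printed earlier):

	kubeadm token create --print-join-command --ttl=0                # on the primary
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=192.168.39.178  # on m02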
	I0115 03:01:42.298299   23809 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:01:42.299915   23809 out.go:177] * Verifying Kubernetes components...
	I0115 03:01:42.298560   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:01:42.301477   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:01:42.494473   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:01:42.516088   23809 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 03:01:42.516337   23809 kapi.go:59] client config for ha-680410: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key", CAFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0115 03:01:42.516460   23809 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.194:8443
	I0115 03:01:42.516734   23809 node_ready.go:35] waiting up to 6m0s for node "ha-680410-m02" to be "Ready" ...
	I0115 03:01:42.516846   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:42.516858   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:42.516869   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:42.516882   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:42.527693   23809 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 03:01:43.017911   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:43.017935   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:43.017947   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:43.017955   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:43.023042   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:43.516991   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:43.517012   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:43.517020   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:43.517026   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:43.519869   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:44.017047   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:44.017067   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:44.017075   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:44.017081   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:44.020707   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:44.517543   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:44.517563   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:44.517571   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:44.517576   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:44.522683   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:44.523683   23809 node_ready.go:53] node "ha-680410-m02" has status "Ready":"False"
	I0115 03:01:45.017074   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:45.017094   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:45.017102   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:45.017108   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:45.020693   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:45.517922   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:45.517943   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:45.517950   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:45.517957   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:45.521364   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:46.017588   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:46.017609   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:46.017616   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:46.017623   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:46.023096   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:46.517590   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:46.517615   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:46.517623   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:46.517629   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:46.521427   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:47.017600   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:47.017619   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:47.017627   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:47.017633   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:47.021486   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:47.022121   23809 node_ready.go:53] node "ha-680410-m02" has status "Ready":"False"
	I0115 03:01:47.517551   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:47.517571   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:47.517579   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:47.517585   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:47.520938   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:48.017395   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:48.017418   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:48.017430   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:48.017439   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:48.021348   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:48.517140   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:48.517166   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:48.517177   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:48.517187   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:48.520787   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:49.017616   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:49.017636   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:49.017644   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:49.017650   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:49.021413   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:49.517861   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:49.517882   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:49.517891   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:49.517900   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:49.521962   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:49.522653   23809 node_ready.go:53] node "ha-680410-m02" has status "Ready":"False"
	I0115 03:01:50.017028   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:50.017050   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:50.017061   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:50.017068   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:50.020707   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:50.517823   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:50.517845   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:50.517855   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:50.517864   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:50.523559   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:51.017678   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:51.017704   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.017716   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.017726   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.021625   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.516974   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:51.516995   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.517002   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.517008   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.520806   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.521427   23809 node_ready.go:49] node "ha-680410-m02" has status "Ready":"True"
	I0115 03:01:51.521443   23809 node_ready.go:38] duration metric: took 9.004675462s for node "ha-680410-m02" to be "Ready" ...
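The raw GET loop above is the hand-rolled equivalent of kubectl's condition wait; the same gate, and the pod checks that follow, expressed with kubectl, with timeouts mirroring the 6m budget in the log:

	kubectl wait --for=condition=Ready node/ha-680410-m02 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m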
	I0115 03:01:51.521450   23809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 03:01:51.521496   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:51.521505   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.521511   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.521517   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.527611   23809 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0115 03:01:51.533876   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.533942   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-krvzt
	I0115 03:01:51.533950   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.533957   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.533963   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.537020   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.537599   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.537611   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.537619   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.537627   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.540292   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.540805   23809 pod_ready.go:92] pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.540820   23809 pod_ready.go:81] duration metric: took 6.924523ms for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.540827   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.540872   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mqq9g
	I0115 03:01:51.540879   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.540886   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.540892   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.543321   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.543909   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.543923   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.543930   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.543935   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.547116   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.547692   23809 pod_ready.go:92] pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.547706   23809 pod_ready.go:81] duration metric: took 6.874076ms for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.547714   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.547757   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410
	I0115 03:01:51.547765   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.547771   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.547777   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.550488   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.550965   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.550978   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.550984   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.550990   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.553562   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.554090   23809 pod_ready.go:92] pod "etcd-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.554103   23809 pod_ready.go:81] duration metric: took 6.384351ms for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.554110   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.554148   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m02
	I0115 03:01:51.554154   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.554161   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.554167   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.556681   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.557359   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:51.557374   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.557384   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.557394   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.559722   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:01:51.560215   23809 pod_ready.go:92] pod "etcd-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.560234   23809 pod_ready.go:81] duration metric: took 6.118371ms for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.560262   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.717617   23809 request.go:629] Waited for 157.297637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410
	I0115 03:01:51.717678   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410
	I0115 03:01:51.717683   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.717691   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.717704   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.722151   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:51.917100   23809 request.go:629] Waited for 194.268802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.917155   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:51.917164   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:51.917189   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:51.917200   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:51.920421   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:51.921123   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:51.921141   23809 pod_ready.go:81] duration metric: took 360.869197ms for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:51.921149   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:52.117323   23809 request.go:629] Waited for 196.116212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.117385   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.117392   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.117400   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.117408   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.121525   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:52.317048   23809 request.go:629] Waited for 194.270712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.317100   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.317113   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.317124   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.317137   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.322580   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:52.517009   23809 request.go:629] Waited for 95.179445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.517069   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:01:52.517074   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.517082   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.517088   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.520851   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:52.717892   23809 request.go:629] Waited for 196.36141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.717965   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:52.717972   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.717983   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.717994   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.721752   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:52.722302   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:52.722320   23809 pod_ready.go:81] duration metric: took 801.163112ms for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:52.722331   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:52.917542   23809 request.go:629] Waited for 195.153703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:01:52.917621   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:01:52.917632   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:52.917644   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:52.917655   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:52.921018   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.117005   23809 request.go:629] Waited for 195.158682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.117058   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.117063   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.117072   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.117081   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.120878   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.121428   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:53.121448   23809 pod_ready.go:81] duration metric: took 399.107978ms for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.121460   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.317099   23809 request.go:629] Waited for 195.562521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:01:53.317157   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:01:53.317163   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.317171   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.317181   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.320265   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.517579   23809 request.go:629] Waited for 196.328604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:53.517647   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:53.517659   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.517666   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.517674   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.520876   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.521331   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:53.521351   23809 pod_ready.go:81] duration metric: took 399.883559ms for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.521362   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.717539   23809 request.go:629] Waited for 196.108117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:01:53.717605   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:01:53.717611   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.717619   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.717628   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.722084   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:53.917103   23809 request.go:629] Waited for 194.279747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.917165   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:53.917171   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:53.917178   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:53.917184   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:53.920330   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:53.921239   23809 pod_ready.go:92] pod "kube-proxy-g2kmv" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:53.921256   23809 pod_ready.go:81] duration metric: took 399.88799ms for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:53.921268   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.117368   23809 request.go:629] Waited for 196.040589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:01:54.117467   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:01:54.117489   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.117500   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.117510   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.120934   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:54.318006   23809 request.go:629] Waited for 196.282183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:54.318059   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:54.318064   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.318078   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.318093   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.325742   23809 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 03:01:54.326424   23809 pod_ready.go:92] pod "kube-proxy-hlbjr" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:54.326447   23809 pod_ready.go:81] duration metric: took 405.170732ms for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.326459   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.517523   23809 request.go:629] Waited for 190.982989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:01:54.517581   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:01:54.517588   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.517597   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.517607   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.522042   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:54.717985   23809 request.go:629] Waited for 195.356286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:54.718040   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:01:54.718045   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.718052   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.718071   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.722936   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:01:54.723621   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:54.723639   23809 pod_ready.go:81] duration metric: took 397.170581ms for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.723651   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:54.917818   23809 request.go:629] Waited for 194.098369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:01:54.917900   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:01:54.917911   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:54.917927   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:54.917955   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:54.923613   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:55.117552   23809 request.go:629] Waited for 193.345721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:55.117622   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:01:55.117629   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.117641   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.117671   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.121591   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:55.122149   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:01:55.122165   23809 pod_ready.go:81] duration metric: took 398.503462ms for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:01:55.122175   23809 pod_ready.go:38] duration metric: took 3.600715297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
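The repeated "Waited for ... due to client-side throttling" lines above come from client-go's client-side rate limiter, governed by rest.Config's QPS and Burst fields, and each pod_ready step is a poll over the pod's Ready condition. A minimal sketch of the same loop in standard client-go (an illustration, not minikube's actual pod_ready helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go throttles requests on the client side; raising QPS/Burst
	// shortens the "Waited for ... due to client-side throttling" delays.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports the Ready condition, mirroring the
	// "waiting up to 6m0s for pod ... to be Ready" phases in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-680410-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}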
	I0115 03:01:55.122188   23809 api_server.go:52] waiting for apiserver process to appear ...
	I0115 03:01:55.122234   23809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:01:55.136397   23809 api_server.go:72] duration metric: took 12.838062479s to wait for apiserver process to appear ...
	I0115 03:01:55.136419   23809 api_server.go:88] waiting for apiserver healthz status ...
	I0115 03:01:55.136439   23809 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0115 03:01:55.143075   23809 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0115 03:01:55.143145   23809 round_trippers.go:463] GET https://192.168.39.194:8443/version
	I0115 03:01:55.143158   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.143169   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.143182   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.144374   23809 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 03:01:55.144470   23809 api_server.go:141] control plane version: v1.28.4
	I0115 03:01:55.144486   23809 api_server.go:131] duration metric: took 8.061859ms to wait for apiserver health ...
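The healthz wait is a plain HTTPS GET that expects a 200 with body "ok", exactly as logged above. A minimal sketch; certificate verification is skipped here for brevity, whereas minikube trusts its generated cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.194:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}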
	I0115 03:01:55.144492   23809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 03:01:55.317870   23809 request.go:629] Waited for 173.31696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.317925   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.317932   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.317942   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.317953   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.323027   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:55.329524   23809 system_pods.go:59] 17 kube-system pods found
	I0115 03:01:55.329550   23809 system_pods.go:61] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:01:55.329555   23809 system_pods.go:61] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:01:55.329559   23809 system_pods.go:61] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:01:55.329563   23809 system_pods.go:61] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:01:55.329567   23809 system_pods.go:61] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:01:55.329571   23809 system_pods.go:61] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:01:55.329575   23809 system_pods.go:61] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:01:55.329579   23809 system_pods.go:61] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:01:55.329585   23809 system_pods.go:61] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:01:55.329589   23809 system_pods.go:61] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:01:55.329596   23809 system_pods.go:61] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:01:55.329599   23809 system_pods.go:61] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:01:55.329603   23809 system_pods.go:61] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:01:55.329607   23809 system_pods.go:61] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:01:55.329611   23809 system_pods.go:61] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:01:55.329615   23809 system_pods.go:61] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:01:55.329619   23809 system_pods.go:61] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:01:55.329626   23809 system_pods.go:74] duration metric: took 185.128562ms to wait for pod list to return data ...
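The system_pods wait is a single List over kube-system followed by a per-pod phase check, as the 17 "Running" lines above show. A compact sketch, assuming a clientset built as in the earlier example:

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allRunning reports whether kube-system has at least one pod and every
// pod is in phase Running, mirroring the check above.
func allRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}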
	I0115 03:01:55.329632   23809 default_sa.go:34] waiting for default service account to be created ...
	I0115 03:01:55.516977   23809 request.go:629] Waited for 187.282498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:01:55.517049   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:01:55.517057   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.517064   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.517075   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.520445   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:55.520651   23809 default_sa.go:45] found service account: "default"
	I0115 03:01:55.520668   23809 default_sa.go:55] duration metric: took 191.029405ms for default service account to be created ...
	I0115 03:01:55.520677   23809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 03:01:55.717835   23809 request.go:629] Waited for 197.075281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.717916   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:01:55.717925   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.717934   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.717942   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.723474   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:01:55.728611   23809 system_pods.go:86] 17 kube-system pods found
	I0115 03:01:55.728646   23809 system_pods.go:89] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:01:55.728655   23809 system_pods.go:89] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:01:55.728665   23809 system_pods.go:89] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:01:55.728676   23809 system_pods.go:89] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:01:55.728686   23809 system_pods.go:89] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:01:55.728696   23809 system_pods.go:89] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:01:55.728703   23809 system_pods.go:89] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:01:55.728711   23809 system_pods.go:89] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:01:55.728718   23809 system_pods.go:89] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:01:55.728727   23809 system_pods.go:89] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:01:55.728737   23809 system_pods.go:89] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:01:55.728745   23809 system_pods.go:89] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:01:55.728752   23809 system_pods.go:89] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:01:55.728757   23809 system_pods.go:89] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:01:55.728763   23809 system_pods.go:89] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:01:55.728767   23809 system_pods.go:89] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:01:55.728773   23809 system_pods.go:89] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:01:55.728779   23809 system_pods.go:126] duration metric: took 208.097098ms to wait for k8s-apps to be running ...
	I0115 03:01:55.728788   23809 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 03:01:55.728831   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:01:55.743991   23809 system_svc.go:56] duration metric: took 15.193588ms WaitForService to wait for kubelet
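systemctl is-active --quiet prints nothing and reports the unit's state purely through its exit code (0 = active), which is why WaitForService only needs the command to succeed. A local sketch of the same check (minikube runs it on the guest over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the state is conveyed by the exit code.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}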
	I0115 03:01:55.744021   23809 kubeadm.go:576] duration metric: took 13.445691685s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 03:01:55.744043   23809 node_conditions.go:102] verifying NodePressure condition ...
	I0115 03:01:55.917453   23809 request.go:629] Waited for 173.328646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes
	I0115 03:01:55.917507   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes
	I0115 03:01:55.917512   23809 round_trippers.go:469] Request Headers:
	I0115 03:01:55.917519   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:01:55.917526   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:01:55.921092   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:01:55.921759   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:01:55.921781   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:01:55.921791   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:01:55.921795   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:01:55.921801   23809 node_conditions.go:105] duration metric: took 177.753297ms to run NodePressure ...
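The NodePressure step lists every node and reads cpu and ephemeral-storage capacity out of status.Capacity, which is what the two pairs of capacity lines above echo. A sketch, again assuming a clientset as in the first example:

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity dumps per-node cpu and ephemeral-storage capacity,
// the two quantities verified in the log above.
func printCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}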
	I0115 03:01:55.921811   23809 start.go:240] waiting for startup goroutines ...
	I0115 03:01:55.921843   23809 start.go:254] writing updated cluster config ...
	I0115 03:01:55.924119   23809 out.go:177] 
	I0115 03:01:55.925733   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:01:55.925825   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 03:01:55.927632   23809 out.go:177] * Starting "ha-680410-m03" control-plane node in "ha-680410" cluster
	I0115 03:01:55.928919   23809 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 03:01:55.928936   23809 cache.go:56] Caching tarball of preloaded images
	I0115 03:01:55.929026   23809 preload.go:173] Found /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 03:01:55.929038   23809 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 03:01:55.929119   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 03:01:55.929266   23809 start.go:360] acquireMachinesLock for ha-680410-m03: {Name:mk08ca2fbfa7e17b9b93de9f109025291dd8cd1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 03:01:55.929301   23809 start.go:364] duration metric: took 19.114µs to acquireMachinesLock for "ha-680410-m03"
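acquireMachinesLock serializes machine creation across concurrent minikube processes; the log shows a named lock taken with Delay:500ms and Timeout:13m0s. A generic sketch of that pattern using a POSIX flock (an assumption for illustration, not minikube's actual lock implementation; Linux-only):

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireFileLock takes an exclusive, non-blocking flock on path, retrying
// every delay until timeout. Hypothetical helper showing the semantics.
func acquireFileLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // caller releases the lock by closing f
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquireFileLock("/tmp/ha-680410-m03.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("lock acquired")
}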
	I0115 03:01:55.929321   23809 start.go:93] Provisioning new machine with config: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:01:55.929452   23809 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0115 03:01:55.931024   23809 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 03:01:55.931106   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:01:55.931138   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:01:55.945054   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0115 03:01:55.945473   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:01:55.945917   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:01:55.945938   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:01:55.946237   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:01:55.946425   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:01:55.946574   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:01:55.946726   23809 start.go:159] libmachine.API.Create for "ha-680410" (driver="kvm2")
	I0115 03:01:55.946753   23809 client.go:168] LocalClient.Create starting
	I0115 03:01:55.946785   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem
	I0115 03:01:55.946818   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 03:01:55.946832   23809 main.go:141] libmachine: Parsing certificate...
	I0115 03:01:55.946895   23809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem
	I0115 03:01:55.946921   23809 main.go:141] libmachine: Decoding PEM data...
	I0115 03:01:55.946939   23809 main.go:141] libmachine: Parsing certificate...
	I0115 03:01:55.946970   23809 main.go:141] libmachine: Running pre-create checks...
	I0115 03:01:55.946979   23809 main.go:141] libmachine: (ha-680410-m03) Calling .PreCreateCheck
	I0115 03:01:55.947095   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetConfigRaw
	I0115 03:01:55.947505   23809 main.go:141] libmachine: Creating machine...
	I0115 03:01:55.947519   23809 main.go:141] libmachine: (ha-680410-m03) Calling .Create
	I0115 03:01:55.947665   23809 main.go:141] libmachine: (ha-680410-m03) Creating KVM machine...
	I0115 03:01:55.949020   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found existing default KVM network
	I0115 03:01:55.949143   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found existing private KVM network mk-ha-680410
	I0115 03:01:55.949304   23809 main.go:141] libmachine: (ha-680410-m03) Setting up store path in /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03 ...
	I0115 03:01:55.949334   23809 main.go:141] libmachine: (ha-680410-m03) Building disk image from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 03:01:55.949349   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:55.949253   24660 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 03:01:55.949407   23809 main.go:141] libmachine: (ha-680410-m03) Downloading /home/jenkins/minikube-integration/17909-7685/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 03:01:56.160656   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:56.160528   24660 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa...
	I0115 03:01:56.453479   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:56.453325   24660 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/ha-680410-m03.rawdisk...
	I0115 03:01:56.453518   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Writing magic tar header
	I0115 03:01:56.453536   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Writing SSH key tar header
	I0115 03:01:56.453556   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:56.453451   24660 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03 ...
	I0115 03:01:56.453576   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03
	I0115 03:01:56.453590   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube/machines
	I0115 03:01:56.453608   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 03:01:56.453626   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17909-7685
	I0115 03:01:56.453637   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 03:01:56.453643   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home/jenkins
	I0115 03:01:56.453649   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Checking permissions on dir: /home
	I0115 03:01:56.453655   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Skipping /home - not owner
	I0115 03:01:56.453857   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03 (perms=drwx------)
	I0115 03:01:56.453892   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube/machines (perms=drwxr-xr-x)
	I0115 03:01:56.453908   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685/.minikube (perms=drwxr-xr-x)
	I0115 03:01:56.453920   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration/17909-7685 (perms=drwxrwxr-x)
	I0115 03:01:56.453931   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 03:01:56.453946   23809 main.go:141] libmachine: (ha-680410-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 03:01:56.453961   23809 main.go:141] libmachine: (ha-680410-m03) Creating domain...
	I0115 03:01:56.454692   23809 main.go:141] libmachine: (ha-680410-m03) define libvirt domain using xml: 
	I0115 03:01:56.454716   23809 main.go:141] libmachine: (ha-680410-m03) <domain type='kvm'>
	I0115 03:01:56.454727   23809 main.go:141] libmachine: (ha-680410-m03)   <name>ha-680410-m03</name>
	I0115 03:01:56.454738   23809 main.go:141] libmachine: (ha-680410-m03)   <memory unit='MiB'>2200</memory>
	I0115 03:01:56.454750   23809 main.go:141] libmachine: (ha-680410-m03)   <vcpu>2</vcpu>
	I0115 03:01:56.454755   23809 main.go:141] libmachine: (ha-680410-m03)   <features>
	I0115 03:01:56.454762   23809 main.go:141] libmachine: (ha-680410-m03)     <acpi/>
	I0115 03:01:56.454767   23809 main.go:141] libmachine: (ha-680410-m03)     <apic/>
	I0115 03:01:56.454773   23809 main.go:141] libmachine: (ha-680410-m03)     <pae/>
	I0115 03:01:56.454780   23809 main.go:141] libmachine: (ha-680410-m03)     
	I0115 03:01:56.454786   23809 main.go:141] libmachine: (ha-680410-m03)   </features>
	I0115 03:01:56.454798   23809 main.go:141] libmachine: (ha-680410-m03)   <cpu mode='host-passthrough'>
	I0115 03:01:56.454825   23809 main.go:141] libmachine: (ha-680410-m03)   
	I0115 03:01:56.454844   23809 main.go:141] libmachine: (ha-680410-m03)   </cpu>
	I0115 03:01:56.454854   23809 main.go:141] libmachine: (ha-680410-m03)   <os>
	I0115 03:01:56.454861   23809 main.go:141] libmachine: (ha-680410-m03)     <type>hvm</type>
	I0115 03:01:56.454868   23809 main.go:141] libmachine: (ha-680410-m03)     <boot dev='cdrom'/>
	I0115 03:01:56.454878   23809 main.go:141] libmachine: (ha-680410-m03)     <boot dev='hd'/>
	I0115 03:01:56.454884   23809 main.go:141] libmachine: (ha-680410-m03)     <bootmenu enable='no'/>
	I0115 03:01:56.454889   23809 main.go:141] libmachine: (ha-680410-m03)   </os>
	I0115 03:01:56.454895   23809 main.go:141] libmachine: (ha-680410-m03)   <devices>
	I0115 03:01:56.454901   23809 main.go:141] libmachine: (ha-680410-m03)     <disk type='file' device='cdrom'>
	I0115 03:01:56.454912   23809 main.go:141] libmachine: (ha-680410-m03)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/boot2docker.iso'/>
	I0115 03:01:56.454918   23809 main.go:141] libmachine: (ha-680410-m03)       <target dev='hdc' bus='scsi'/>
	I0115 03:01:56.454925   23809 main.go:141] libmachine: (ha-680410-m03)       <readonly/>
	I0115 03:01:56.454940   23809 main.go:141] libmachine: (ha-680410-m03)     </disk>
	I0115 03:01:56.454946   23809 main.go:141] libmachine: (ha-680410-m03)     <disk type='file' device='disk'>
	I0115 03:01:56.454957   23809 main.go:141] libmachine: (ha-680410-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 03:01:56.455025   23809 main.go:141] libmachine: (ha-680410-m03)       <source file='/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/ha-680410-m03.rawdisk'/>
	I0115 03:01:56.455052   23809 main.go:141] libmachine: (ha-680410-m03)       <target dev='hda' bus='virtio'/>
	I0115 03:01:56.455062   23809 main.go:141] libmachine: (ha-680410-m03)     </disk>
	I0115 03:01:56.455079   23809 main.go:141] libmachine: (ha-680410-m03)     <interface type='network'>
	I0115 03:01:56.455094   23809 main.go:141] libmachine: (ha-680410-m03)       <source network='mk-ha-680410'/>
	I0115 03:01:56.455108   23809 main.go:141] libmachine: (ha-680410-m03)       <model type='virtio'/>
	I0115 03:01:56.455121   23809 main.go:141] libmachine: (ha-680410-m03)     </interface>
	I0115 03:01:56.455133   23809 main.go:141] libmachine: (ha-680410-m03)     <interface type='network'>
	I0115 03:01:56.455147   23809 main.go:141] libmachine: (ha-680410-m03)       <source network='default'/>
	I0115 03:01:56.455158   23809 main.go:141] libmachine: (ha-680410-m03)       <model type='virtio'/>
	I0115 03:01:56.455169   23809 main.go:141] libmachine: (ha-680410-m03)     </interface>
	I0115 03:01:56.455181   23809 main.go:141] libmachine: (ha-680410-m03)     <serial type='pty'>
	I0115 03:01:56.455195   23809 main.go:141] libmachine: (ha-680410-m03)       <target port='0'/>
	I0115 03:01:56.455204   23809 main.go:141] libmachine: (ha-680410-m03)     </serial>
	I0115 03:01:56.455218   23809 main.go:141] libmachine: (ha-680410-m03)     <console type='pty'>
	I0115 03:01:56.455231   23809 main.go:141] libmachine: (ha-680410-m03)       <target type='serial' port='0'/>
	I0115 03:01:56.455251   23809 main.go:141] libmachine: (ha-680410-m03)     </console>
	I0115 03:01:56.455262   23809 main.go:141] libmachine: (ha-680410-m03)     <rng model='virtio'>
	I0115 03:01:56.455269   23809 main.go:141] libmachine: (ha-680410-m03)       <backend model='random'>/dev/random</backend>
	I0115 03:01:56.455282   23809 main.go:141] libmachine: (ha-680410-m03)     </rng>
	I0115 03:01:56.455294   23809 main.go:141] libmachine: (ha-680410-m03)     
	I0115 03:01:56.455305   23809 main.go:141] libmachine: (ha-680410-m03)     
	I0115 03:01:56.455316   23809 main.go:141] libmachine: (ha-680410-m03)   </devices>
	I0115 03:01:56.455333   23809 main.go:141] libmachine: (ha-680410-m03) </domain>
	I0115 03:01:56.455351   23809 main.go:141] libmachine: (ha-680410-m03) 
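Once the domain XML above is rendered, the driver hands it to libvirt to define and boot ("Creating domain..."). A minimal standalone sketch with the libvirt Go bindings (assumes libvirt.org/go/libvirt is available; the real kvm2 driver plugin does considerably more around this call):

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-680410-m03.xml") // the <domain> document above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boot the domain
		panic(err)
	}
	fmt.Println("domain started")
}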
	I0115 03:01:56.462134   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:14:ed:aa in network default
	I0115 03:01:56.462672   23809 main.go:141] libmachine: (ha-680410-m03) Ensuring networks are active...
	I0115 03:01:56.462702   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:56.463363   23809 main.go:141] libmachine: (ha-680410-m03) Ensuring network default is active
	I0115 03:01:56.463743   23809 main.go:141] libmachine: (ha-680410-m03) Ensuring network mk-ha-680410 is active
	I0115 03:01:56.464065   23809 main.go:141] libmachine: (ha-680410-m03) Getting domain xml...
	I0115 03:01:56.464732   23809 main.go:141] libmachine: (ha-680410-m03) Creating domain...
	I0115 03:01:57.688561   23809 main.go:141] libmachine: (ha-680410-m03) Waiting to get IP...
	I0115 03:01:57.689332   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:57.689794   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:57.689824   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:57.689741   24660 retry.go:31] will retry after 283.330091ms: waiting for machine to come up
	I0115 03:01:57.974264   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:57.974705   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:57.974734   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:57.974651   24660 retry.go:31] will retry after 285.927902ms: waiting for machine to come up
	I0115 03:01:58.261924   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:58.262382   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:58.262412   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:58.262337   24660 retry.go:31] will retry after 338.28018ms: waiting for machine to come up
	I0115 03:01:58.601703   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:58.602144   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:58.602173   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:58.602094   24660 retry.go:31] will retry after 442.790409ms: waiting for machine to come up
	I0115 03:01:59.046303   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:59.046656   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:59.046683   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:59.046613   24660 retry.go:31] will retry after 540.553612ms: waiting for machine to come up
	I0115 03:01:59.588416   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:01:59.588733   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:01:59.588761   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:01:59.588700   24660 retry.go:31] will retry after 669.473346ms: waiting for machine to come up
	I0115 03:02:00.259398   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:00.259808   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:00.259837   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:00.259757   24660 retry.go:31] will retry after 819.907617ms: waiting for machine to come up
	I0115 03:02:01.081186   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:01.081616   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:01.081642   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:01.081592   24660 retry.go:31] will retry after 1.093402731s: waiting for machine to come up
	I0115 03:02:02.177200   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:02.177751   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:02.177781   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:02.177698   24660 retry.go:31] will retry after 1.514211711s: waiting for machine to come up
	I0115 03:02:03.694257   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:03.694687   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:03.694717   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:03.694629   24660 retry.go:31] will retry after 1.686814242s: waiting for machine to come up
	I0115 03:02:05.383342   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:05.383759   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:05.383792   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:05.383705   24660 retry.go:31] will retry after 1.928980865s: waiting for machine to come up
	I0115 03:02:07.315251   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:07.315742   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:07.315780   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:07.315683   24660 retry.go:31] will retry after 3.16632128s: waiting for machine to come up
	I0115 03:02:10.484411   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:10.484778   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:10.484801   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:10.484738   24660 retry.go:31] will retry after 3.998322995s: waiting for machine to come up
	I0115 03:02:14.484134   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:14.484565   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find current IP address of domain ha-680410-m03 in network mk-ha-680410
	I0115 03:02:14.484584   23809 main.go:141] libmachine: (ha-680410-m03) DBG | I0115 03:02:14.484490   24660 retry.go:31] will retry after 4.72777601s: waiting for machine to come up
	I0115 03:02:19.215650   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.216082   23809 main.go:141] libmachine: (ha-680410-m03) Found IP for machine: 192.168.39.182
	I0115 03:02:19.216110   23809 main.go:141] libmachine: (ha-680410-m03) Reserving static IP address...
	I0115 03:02:19.216126   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has current primary IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.216522   23809 main.go:141] libmachine: (ha-680410-m03) DBG | unable to find host DHCP lease matching {name: "ha-680410-m03", mac: "52:54:00:d4:18:a6", ip: "192.168.39.182"} in network mk-ha-680410
	I0115 03:02:19.286224   23809 main.go:141] libmachine: (ha-680410-m03) Reserved static IP address: 192.168.39.182
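The retry.go lines above show the wait-for-IP loop sleeping a growing, jittered interval between DHCP lease lookups until the address appears. A small sketch of that backoff pattern (illustrative only, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a jittered, growing delay between
// attempts until it succeeds or the deadline passes.
func retryWithBackoff(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out")
		}
		// Grow the delay and add up to 50% random jitter, similar to the
		// increasing "will retry after ..." intervals in the log.
		d := base + time.Duration(rand.Int63n(int64(base/2)))
		fmt.Printf("will retry after %v\n", d)
		time.Sleep(d)
		base = base * 3 / 2
	}
}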
	I0115 03:02:19.286250   23809 main.go:141] libmachine: (ha-680410-m03) Waiting for SSH to be available...
	I0115 03:02:19.286263   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Getting to WaitForSSH function...
	I0115 03:02:19.288986   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.289426   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.289458   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.289579   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Using SSH client type: external
	I0115 03:02:19.289602   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa (-rw-------)
	I0115 03:02:19.289645   23809 main.go:141] libmachine: (ha-680410-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 03:02:19.289668   23809 main.go:141] libmachine: (ha-680410-m03) DBG | About to run SSH command:
	I0115 03:02:19.289683   23809 main.go:141] libmachine: (ha-680410-m03) DBG | exit 0
	I0115 03:02:19.386813   23809 main.go:141] libmachine: (ha-680410-m03) DBG | SSH cmd err, output: <nil>: 
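The external SSH probe simply execs the system ssh binary with host-key checking disabled and runs `exit 0`; a zero exit status means the guest's sshd is accepting connections, and no output is expected. An equivalent sketch with os/exec, with the options copied from the log line above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa",
		"-p", "22",
		"docker@192.168.39.182",
		"exit 0") // success alone signals SSH availability
	err := cmd.Run()
	fmt.Println("ssh reachable:", err == nil)
}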
	I0115 03:02:19.387022   23809 main.go:141] libmachine: (ha-680410-m03) KVM machine creation complete!
	I0115 03:02:19.387335   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetConfigRaw
	I0115 03:02:19.387861   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:19.388061   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:19.388221   23809 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 03:02:19.388233   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetState
	I0115 03:02:19.389489   23809 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 03:02:19.389505   23809 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 03:02:19.389511   23809 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 03:02:19.389518   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.391638   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.392024   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.392056   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.392217   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.392396   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.392569   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.392700   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.392878   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.393225   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.393237   23809 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 03:02:19.518200   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 03:02:19.518221   23809 main.go:141] libmachine: Detecting the provisioner...
	I0115 03:02:19.518229   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.520862   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.521192   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.521217   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.521429   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.521611   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.521739   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.521864   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.522038   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.522387   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.522399   23809 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 03:02:19.651979   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 03:02:19.652046   23809 main.go:141] libmachine: found compatible host: buildroot
	I0115 03:02:19.652060   23809 main.go:141] libmachine: Provisioning with buildroot...
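Provisioner detection is just `cat /etc/os-release` parsed as KEY=VALUE pairs; the ID field (here "buildroot") selects the matching provisioner. A hedged sketch of that parse:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			kv[k] = strings.Trim(v, `"`) // values may be quoted
		}
	}
	fmt.Println("ID:", kv["ID"]) // "buildroot" selects the buildroot provisioner
}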
	I0115 03:02:19.652075   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:02:19.652353   23809 buildroot.go:166] provisioning hostname "ha-680410-m03"
	I0115 03:02:19.652382   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:02:19.652562   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.655517   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.656044   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.656074   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.656221   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.656434   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.656622   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.656767   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.656923   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.657300   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.657314   23809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-680410-m03 && echo "ha-680410-m03" | sudo tee /etc/hostname
	I0115 03:02:19.799601   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-680410-m03
	
	I0115 03:02:19.799640   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.802372   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.802722   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.802747   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.802921   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.803115   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.803267   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.803410   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.803550   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:19.803854   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:19.803871   23809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-680410-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-680410-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-680410-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 03:02:19.938954   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 03:02:19.938985   23809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17909-7685/.minikube CaCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17909-7685/.minikube}
	I0115 03:02:19.939004   23809 buildroot.go:174] setting up certificates
	I0115 03:02:19.939014   23809 provision.go:84] configureAuth start
	I0115 03:02:19.939027   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetMachineName
	I0115 03:02:19.939320   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:19.941872   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.942203   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.942234   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.942368   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.944336   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.944731   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.944756   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.944889   23809 provision.go:143] copyHostCerts
	I0115 03:02:19.944912   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 03:02:19.944940   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem, removing ...
	I0115 03:02:19.944951   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem
	I0115 03:02:19.945012   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/cert.pem (1123 bytes)
	I0115 03:02:19.945088   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 03:02:19.945105   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem, removing ...
	I0115 03:02:19.945110   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem
	I0115 03:02:19.945135   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/key.pem (1679 bytes)
	I0115 03:02:19.945176   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 03:02:19.945191   23809 exec_runner.go:144] found /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem, removing ...
	I0115 03:02:19.945199   23809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem
	I0115 03:02:19.945222   23809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17909-7685/.minikube/ca.pem (1078 bytes)
	I0115 03:02:19.945275   23809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem org=jenkins.ha-680410-m03 san=[127.0.0.1 192.168.39.182 ha-680410-m03 localhost minikube]
	I0115 03:02:19.993053   23809 provision.go:177] copyRemoteCerts
	I0115 03:02:19.993096   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 03:02:19.993112   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:19.995574   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.995947   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:19.995977   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:19.996169   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:19.996338   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:19.996489   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:19.996630   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:02:20.089335   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 03:02:20.089415   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 03:02:20.111300   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 03:02:20.111362   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0115 03:02:20.134309   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 03:02:20.134358   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 03:02:20.155039   23809 provision.go:87] duration metric: took 216.011418ms to configureAuth
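
configureAuth above re-syncs the host-side CA material (copyHostCerts), mints a server certificate whose SANs cover the node's IP and names, and pushes it to /etc/docker on the guest (copyRemoteCerts). A minimal sketch of the SAN-bearing certificate step with crypto/x509 — self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem as the parent:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-680410-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list mirrors the provision.go log line above.
			DNSNames:    []string{"ha-680410-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.182")},
		}
		// Self-signed for brevity; minikube passes its CA cert/key as parent instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
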
	I0115 03:02:20.155064   23809 buildroot.go:189] setting minikube options for container-runtime
	I0115 03:02:20.155314   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:02:20.155333   23809 main.go:141] libmachine: Checking connection to Docker...
	I0115 03:02:20.155343   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetURL
	I0115 03:02:20.156466   23809 main.go:141] libmachine: (ha-680410-m03) DBG | Using libvirt version 6000000
	I0115 03:02:20.158686   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.159061   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.159089   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.159250   23809 main.go:141] libmachine: Docker is up and running!
	I0115 03:02:20.159263   23809 main.go:141] libmachine: Reticulating splines...
	I0115 03:02:20.159270   23809 client.go:171] duration metric: took 24.212507222s to LocalClient.Create
	I0115 03:02:20.159306   23809 start.go:167] duration metric: took 24.212577721s to libmachine.API.Create "ha-680410"
	I0115 03:02:20.159318   23809 start.go:293] postStartSetup for "ha-680410-m03" (driver="kvm2")
	I0115 03:02:20.159332   23809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 03:02:20.159362   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.159577   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 03:02:20.159598   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:20.161614   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.162001   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.162027   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.162178   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.162363   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.162507   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.162649   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:02:20.257880   23809 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 03:02:20.262302   23809 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 03:02:20.262329   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/addons for local assets ...
	I0115 03:02:20.262391   23809 filesync.go:126] Scanning /home/jenkins/minikube-integration/17909-7685/.minikube/files for local assets ...
	I0115 03:02:20.262459   23809 filesync.go:149] local asset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> 149542.pem in /etc/ssl/certs
	I0115 03:02:20.262469   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /etc/ssl/certs/149542.pem
	I0115 03:02:20.262549   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 03:02:20.271448   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /etc/ssl/certs/149542.pem (1708 bytes)
	I0115 03:02:20.292827   23809 start.go:296] duration metric: took 133.498451ms for postStartSetup
	I0115 03:02:20.292874   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetConfigRaw
	I0115 03:02:20.293433   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:20.296020   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.296434   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.296467   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.296830   23809 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/config.json ...
	I0115 03:02:20.296997   23809 start.go:128] duration metric: took 24.36753448s to createHost
	I0115 03:02:20.297017   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:20.299002   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.299316   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.299345   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.299472   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.299647   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.299773   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.299869   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.300023   23809 main.go:141] libmachine: Using SSH client type: native
	I0115 03:02:20.300463   23809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0115 03:02:20.300478   23809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 03:02:20.431976   23809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705287740.414243507
	
	I0115 03:02:20.431997   23809 fix.go:216] guest clock: 1705287740.414243507
	I0115 03:02:20.432005   23809 fix.go:229] Guest: 2024-01-15 03:02:20.414243507 +0000 UTC Remote: 2024-01-15 03:02:20.297006622 +0000 UTC m=+232.440540762 (delta=117.236885ms)
	I0115 03:02:20.432022   23809 fix.go:200] guest clock delta is within tolerance: 117.236885ms
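
The clock check above runs `date +%s.%N` on the guest (rendered as %!s(MISSING).%!N(MISSING) in the log, apparently a missing fmt argument in the logger) and compares the result against the host clock. A small sketch of the parse-and-compare, with an illustrative one-second tolerance; the real threshold lives in fix.go:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1705287740.414243507" (seconds.nanoseconds) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad to 9 digits so ".4142" means 414200000ns.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1705287740.414243507")
		delta := time.Since(guest)
		const tolerance = time.Second // illustrative; not minikube's actual value
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
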
	I0115 03:02:20.432029   23809 start.go:83] releasing machines lock for "ha-680410-m03", held for 24.502717337s
	I0115 03:02:20.432055   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.432293   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:20.434946   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.435329   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.435357   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.437924   23809 out.go:177] * Found network options:
	I0115 03:02:20.439485   23809 out.go:177]   - NO_PROXY=192.168.39.194,192.168.39.178
	W0115 03:02:20.440783   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 03:02:20.440802   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 03:02:20.440814   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.441345   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.441521   23809 main.go:141] libmachine: (ha-680410-m03) Calling .DriverName
	I0115 03:02:20.441615   23809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 03:02:20.441651   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	W0115 03:02:20.441747   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 03:02:20.441763   23809 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 03:02:20.441831   23809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 03:02:20.441852   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHHostname
	I0115 03:02:20.444177   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.444569   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.444602   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.444624   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.444804   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.444970   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.445089   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:20.445110   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:20.445119   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.445263   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHPort
	I0115 03:02:20.445274   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	I0115 03:02:20.445375   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHKeyPath
	I0115 03:02:20.445494   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetSSHUsername
	I0115 03:02:20.445606   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410-m03/id_rsa Username:docker}
	W0115 03:02:20.565161   23809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 03:02:20.565238   23809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 03:02:20.580337   23809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 03:02:20.580364   23809 start.go:494] detecting cgroup driver to use...
	I0115 03:02:20.580425   23809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 03:02:20.612466   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 03:02:20.624298   23809 docker.go:217] disabling cri-docker service (if available) ...
	I0115 03:02:20.624354   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 03:02:20.637658   23809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 03:02:20.650766   23809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 03:02:20.764448   23809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 03:02:20.880847   23809 docker.go:233] disabling docker service ...
	I0115 03:02:20.880908   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 03:02:20.896381   23809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 03:02:20.910450   23809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 03:02:21.015962   23809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 03:02:21.130461   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 03:02:21.143499   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 03:02:21.162509   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 03:02:21.173746   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 03:02:21.184322   23809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 03:02:21.184381   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 03:02:21.194256   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 03:02:21.203818   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 03:02:21.212928   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 03:02:21.222770   23809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 03:02:21.232037   23809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
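
The sed series above rewrites /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.9, disable restrict_oom_score_adj, force SystemdCgroup = false (matching the "cgroupfs" driver chosen above), migrate the runtime type to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. As an illustration, one of those edits done natively in Go instead of via sed (assumes it runs on the guest with access to the file):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same substitution as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}
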
	I0115 03:02:21.240956   23809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 03:02:21.248671   23809 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 03:02:21.248732   23809 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 03:02:21.261150   23809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 03:02:21.269189   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:02:21.385202   23809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 03:02:21.414976   23809 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 03:02:21.415078   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 03:02:21.420123   23809 retry.go:31] will retry after 1.048823659s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
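
The socket is not there yet on the first stat, so retry.go schedules another attempt; start.go allows up to 60s overall. A minimal sketch of such a bounded poll loop, assuming a fixed one-second interval rather than minikube's computed backoff:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("containerd socket is up")
	}
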
	I0115 03:02:22.469389   23809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 03:02:22.474867   23809 start.go:562] Will wait 60s for crictl version
	I0115 03:02:22.474916   23809 ssh_runner.go:195] Run: which crictl
	I0115 03:02:22.478743   23809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 03:02:22.524924   23809 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 03:02:22.525022   23809 ssh_runner.go:195] Run: containerd --version
	I0115 03:02:22.558246   23809 ssh_runner.go:195] Run: containerd --version
	I0115 03:02:22.593483   23809 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 03:02:22.594868   23809 out.go:177]   - env NO_PROXY=192.168.39.194
	I0115 03:02:22.596333   23809 out.go:177]   - env NO_PROXY=192.168.39.194,192.168.39.178
	I0115 03:02:22.597546   23809 main.go:141] libmachine: (ha-680410-m03) Calling .GetIP
	I0115 03:02:22.600264   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:22.600702   23809 main.go:141] libmachine: (ha-680410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:18:a6", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 04:02:11 +0000 UTC Type:0 Mac:52:54:00:d4:18:a6 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-680410-m03 Clientid:01:52:54:00:d4:18:a6}
	I0115 03:02:22.600720   23809 main.go:141] libmachine: (ha-680410-m03) DBG | domain ha-680410-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d4:18:a6 in network mk-ha-680410
	I0115 03:02:22.600922   23809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 03:02:22.610566   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 03:02:22.625242   23809 mustload.go:65] Loading cluster: ha-680410
	I0115 03:02:22.625522   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:02:22.625910   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:02:22.625950   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:02:22.642170   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0115 03:02:22.642590   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:02:22.643064   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:02:22.643091   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:02:22.643424   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:02:22.643614   23809 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:02:22.645297   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:02:22.645661   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:02:22.645705   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:02:22.661250   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0115 03:02:22.661663   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:02:22.662152   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:02:22.662172   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:02:22.662461   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:02:22.662626   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:02:22.662785   23809 certs.go:68] Setting up /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410 for IP: 192.168.39.182
	I0115 03:02:22.662795   23809 certs.go:194] generating shared ca certs ...
	I0115 03:02:22.662806   23809 certs.go:226] acquiring lock for ca certs: {Name:mk4b44e68f01694cff17056fe1b88a9d17c4d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 03:02:22.662920   23809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key
	I0115 03:02:22.662954   23809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key
	I0115 03:02:22.662963   23809 certs.go:256] generating profile certs ...
	I0115 03:02:22.663026   23809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key
	I0115 03:02:22.663049   23809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4
	I0115 03:02:22.663060   23809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194 192.168.39.178 192.168.39.182 192.168.39.254]
	I0115 03:02:22.879349   23809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4 ...
	I0115 03:02:22.879379   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4: {Name:mk2126f339e3e0824b456c72fb72c0e7f9970d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 03:02:22.879575   23809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4 ...
	I0115 03:02:22.879589   23809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4: {Name:mk74bb1ea4d6a89296545545641cdd0e1c436257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 03:02:22.879688   23809 certs.go:381] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt.7ea7d8a4 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt
	I0115 03:02:22.879861   23809 certs.go:385] copying /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key.7ea7d8a4 -> /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key
	I0115 03:02:22.880054   23809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key
	I0115 03:02:22.880078   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 03:02:22.880105   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 03:02:22.880128   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 03:02:22.880149   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 03:02:22.880170   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 03:02:22.880194   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 03:02:22.880220   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 03:02:22.880241   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 03:02:22.880310   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem (1338 bytes)
	W0115 03:02:22.880354   23809 certs.go:480] ignoring /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954_empty.pem, impossibly tiny 0 bytes
	I0115 03:02:22.880365   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 03:02:22.880387   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/ca.pem (1078 bytes)
	I0115 03:02:22.880412   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/cert.pem (1123 bytes)
	I0115 03:02:22.880440   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/key.pem (1679 bytes)
	I0115 03:02:22.880483   23809 certs.go:484] found cert: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem (1708 bytes)
	I0115 03:02:22.880510   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem -> /usr/share/ca-certificates/149542.pem
	I0115 03:02:22.880530   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:22.880548   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem -> /usr/share/ca-certificates/14954.pem
	I0115 03:02:22.880582   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:02:22.883630   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:22.884009   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:02:22.884031   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:22.884231   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:02:22.884433   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:02:22.884579   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:02:22.884729   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:02:22.963683   23809 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0115 03:02:22.969225   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0115 03:02:22.981102   23809 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0115 03:02:22.985652   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0115 03:02:22.997376   23809 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0115 03:02:23.002426   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0115 03:02:23.014305   23809 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0115 03:02:23.018135   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0115 03:02:23.031106   23809 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0115 03:02:23.037022   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0115 03:02:23.048982   23809 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0115 03:02:23.052710   23809 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0115 03:02:23.063349   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 03:02:23.086374   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 03:02:23.108596   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 03:02:23.129896   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 03:02:23.151909   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0115 03:02:23.174825   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 03:02:23.197430   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 03:02:23.220501   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 03:02:23.244241   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/ssl/certs/149542.pem --> /usr/share/ca-certificates/149542.pem (1708 bytes)
	I0115 03:02:23.266017   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 03:02:23.288669   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/certs/14954.pem --> /usr/share/ca-certificates/14954.pem (1338 bytes)
	I0115 03:02:23.312913   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0115 03:02:23.328701   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0115 03:02:23.343776   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0115 03:02:23.359295   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0115 03:02:23.375874   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0115 03:02:23.390951   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0115 03:02:23.406572   23809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0115 03:02:23.421886   23809 ssh_runner.go:195] Run: openssl version
	I0115 03:02:23.426890   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149542.pem && ln -fs /usr/share/ca-certificates/149542.pem /etc/ssl/certs/149542.pem"
	I0115 03:02:23.437992   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149542.pem
	I0115 03:02:23.442428   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 15 02:54 /usr/share/ca-certificates/149542.pem
	I0115 03:02:23.442471   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149542.pem
	I0115 03:02:23.447720   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149542.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 03:02:23.457428   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 03:02:23.467248   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:23.471376   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 15 02:46 /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:23.471439   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 03:02:23.476583   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 03:02:23.486959   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14954.pem && ln -fs /usr/share/ca-certificates/14954.pem /etc/ssl/certs/14954.pem"
	I0115 03:02:23.497899   23809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14954.pem
	I0115 03:02:23.502017   23809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:54 /usr/share/ca-certificates/14954.pem
	I0115 03:02:23.502059   23809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14954.pem
	I0115 03:02:23.507184   23809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14954.pem /etc/ssl/certs/51391683.0"
	I0115 03:02:23.517561   23809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0115 03:02:23.521685   23809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0115 03:02:23.521732   23809 kubeadm.go:928] updating node {m03 192.168.39.182 8443 v1.28.4 containerd true true} ...
	I0115 03:02:23.521807   23809 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-680410-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0115 03:02:23.521830   23809 kube-vip.go:101] generating kube-vip config ...
	I0115 03:02:23.521858   23809 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_ddns
	      value: "false"
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.6.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
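
The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml as a static pod: the kubelet runs it directly, and the elected leader ARPs the 192.168.39.254 control-plane VIP on eth0 and serves port 8443. A toy rendering of the variable fields with text/template — the template literal is abbreviated, not minikube's full one:

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down stand-in for the full manifest shown above.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .VIP }}
	    image: {{ .Image }}
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		err := t.Execute(os.Stdout, struct {
			VIP, Image string
			Port       int
		}{VIP: "192.168.39.254", Image: "ghcr.io/kube-vip/kube-vip:v0.6.4", Port: 8443})
		if err != nil {
			panic(err)
		}
	}
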
	I0115 03:02:23.521886   23809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 03:02:23.530677   23809 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0115 03:02:23.530729   23809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0115 03:02:23.540741   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0115 03:02:23.540763   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:02:23.540763   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0115 03:02:23.540781   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 03:02:23.540741   23809 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0115 03:02:23.540830   23809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 03:02:23.540876   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:02:23.540832   23809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 03:02:23.548286   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0115 03:02:23.548311   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0115 03:02:23.548313   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0115 03:02:23.548328   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0115 03:02:23.568645   23809 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:02:23.568728   23809 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 03:02:23.631336   23809 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0115 03:02:23.631381   23809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0115 03:02:24.439936   23809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0115 03:02:24.448798   23809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0115 03:02:24.464478   23809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 03:02:24.480321   23809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1265 bytes)
	I0115 03:02:24.495763   23809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0115 03:02:24.499260   23809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 03:02:24.510148   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:02:24.615593   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:02:24.629982   23809 host.go:66] Checking if "ha-680410" exists ...
	I0115 03:02:24.630417   23809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:02:24.630468   23809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:02:24.646588   23809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0115 03:02:24.647008   23809 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:02:24.647557   23809 main.go:141] libmachine: Using API Version  1
	I0115 03:02:24.647578   23809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:02:24.647969   23809 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:02:24.648148   23809 main.go:141] libmachine: (ha-680410) Calling .DriverName
	I0115 03:02:24.648282   23809 start.go:316] joinCluster: &{Name:ha-680410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-680410 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 03:02:24.648428   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 03:02:24.648447   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHHostname
	I0115 03:02:24.651665   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:24.652159   23809 main.go:141] libmachine: (ha-680410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e1:70", ip: ""} in network mk-ha-680410: {Iface:virbr1 ExpiryTime:2024-01-15 03:58:43 +0000 UTC Type:0 Mac:52:54:00:f3:e1:70 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-680410 Clientid:01:52:54:00:f3:e1:70}
	I0115 03:02:24.652186   23809 main.go:141] libmachine: (ha-680410) DBG | domain ha-680410 has defined IP address 192.168.39.194 and MAC address 52:54:00:f3:e1:70 in network mk-ha-680410
	I0115 03:02:24.652377   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHPort
	I0115 03:02:24.652560   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHKeyPath
	I0115 03:02:24.652735   23809 main.go:141] libmachine: (ha-680410) Calling .GetSSHUsername
	I0115 03:02:24.652931   23809 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/ha-680410/id_rsa Username:docker}
	I0115 03:02:24.852299   23809 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:02:24.852345   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p6qcyi.4ds32fdjmsrqfkef --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m03 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443"
	I0115 03:02:50.660804   23809 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p6qcyi.4ds32fdjmsrqfkef --discovery-token-ca-cert-hash sha256:8ea6922acf4f080ab85106df920fd454d942c8bd0ccb8c08ccc582c2701539d8 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-680410-m03 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443": (25.808434304s)
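
The join is a single kubeadm invocation on the new node, and ssh_runner reports its wall-clock duration (~25.8s here). A local-exec sketch of the same timing bookkeeping; `true` stands in for the real command:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// timedRun mirrors ssh_runner's "Completed: ... (25.8s)" bookkeeping for a local command.
	func timedRun(name string, args ...string) (time.Duration, error) {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		elapsed := time.Since(start)
		if err != nil {
			return elapsed, fmt.Errorf("%s: %w\n%s", name, err, out)
		}
		return elapsed, nil
	}

	func main() {
		// Illustrative stand-in for the kubeadm join above.
		d, err := timedRun("true")
		fmt.Printf("completed in %v, err=%v\n", d, err)
	}
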
	I0115 03:02:50.660845   23809 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 03:02:51.152135   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-680410-m03 minikube.k8s.io/updated_at=2024_01_15T03_02_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b minikube.k8s.io/name=ha-680410 minikube.k8s.io/primary=false
	I0115 03:02:51.295674   23809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-680410-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0115 03:02:51.438057   23809 start.go:318] duration metric: took 26.789770985s to joinCluster
	I0115 03:02:51.438129   23809 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 03:02:51.438624   23809 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:02:51.439768   23809 out.go:177] * Verifying Kubernetes components...
	I0115 03:02:51.441000   23809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 03:02:51.637150   23809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0115 03:02:51.654178   23809 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 03:02:51.654506   23809 kapi.go:59] client config for ha-680410: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ha-680410/client.key", CAFile:"/home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0115 03:02:51.654572   23809 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.194:8443
	I0115 03:02:51.654805   23809 node_ready.go:35] waiting up to 6m0s for node "ha-680410-m03" to be "Ready" ...
	I0115 03:02:51.654887   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:51.654897   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:51.654906   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:51.654916   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:51.658567   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:52.155669   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:52.155694   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:52.155704   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:52.155714   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:52.160286   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:52.655736   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:52.655759   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:52.655770   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:52.655779   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:52.660900   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:02:53.155956   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:53.155975   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:53.155982   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:53.155988   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:53.168792   23809 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0115 03:02:53.655545   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:53.655573   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:53.655585   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:53.655594   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:53.659816   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:53.660674   23809 node_ready.go:53] node "ha-680410-m03" has status "Ready":"False"
	I0115 03:02:54.155050   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:54.155080   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:54.155092   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:54.155101   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:54.158578   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:54.655296   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:54.655315   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:54.655323   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:54.655329   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:54.659140   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:55.155044   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:55.155063   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:55.155070   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:55.155076   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:55.158573   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:55.655042   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:55.655069   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:55.655080   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:55.655089   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:55.659840   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:56.155187   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:56.155218   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:56.155230   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:56.155240   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:56.163530   23809 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 03:02:56.164544   23809 node_ready.go:53] node "ha-680410-m03" has status "Ready":"False"
	I0115 03:02:56.655069   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:56.655092   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:56.655100   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:56.655106   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:56.660151   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:02:57.155164   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:57.155187   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:57.155195   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:57.155201   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:57.158602   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:57.655954   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:57.655978   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:57.655989   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:57.655998   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:57.665383   23809 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 03:02:58.155431   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:58.155456   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.155469   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.155477   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.159422   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.655156   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:58.655179   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.655187   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.655193   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.659921   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:58.660632   23809 node_ready.go:49] node "ha-680410-m03" has status "Ready":"True"
	I0115 03:02:58.660652   23809 node_ready.go:38] duration metric: took 7.00583143s for node "ha-680410-m03" to be "Ready" ...
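The wait above is a bare GET loop against /api/v1/nodes/ha-680410-m03 until the node's Ready condition flips to True; minikube logs every round trip individually, which is why each poll shows up as its own GET. A minimal client-go sketch of the same wait (a hypothetical standalone program, not minikube's own code; it assumes a kubeconfig at the default path and reuses the node name and 6m0s budget from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same budget as the log: poll until the node reports Ready, up to 6 minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-680410-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-680410-m03 is Ready")
}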
	I0115 03:02:58.660659   23809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 03:02:58.660727   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:02:58.660738   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.660745   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.660751   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.669926   23809 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 03:02:58.677850   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.677927   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-krvzt
	I0115 03:02:58.677938   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.677948   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.677958   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.681833   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.682521   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:02:58.682535   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.682542   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.682550   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.687066   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:02:58.687700   23809 pod_ready.go:92] pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.687721   23809 pod_ready.go:81] duration metric: took 9.848075ms for pod "coredns-5dd5756b68-krvzt" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.687732   23809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.687782   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mqq9g
	I0115 03:02:58.687791   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.687797   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.687803   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.690914   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.691955   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:02:58.691970   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.691980   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.691988   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.695265   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:58.695684   23809 pod_ready.go:92] pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.695703   23809 pod_ready.go:81] duration metric: took 7.963099ms for pod "coredns-5dd5756b68-mqq9g" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.695714   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.695762   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410
	I0115 03:02:58.695771   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.695778   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.695784   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.698607   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.699061   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:02:58.699073   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.699080   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.699086   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.701654   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.702116   23809 pod_ready.go:92] pod "etcd-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.702131   23809 pod_ready.go:81] duration metric: took 6.409578ms for pod "etcd-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.702141   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.702190   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m02
	I0115 03:02:58.702201   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.702212   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.702224   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.704794   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.705365   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:02:58.705384   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.705395   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.705406   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.707989   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:02:58.708568   23809 pod_ready.go:92] pod "etcd-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:02:58.708583   23809 pod_ready.go:81] duration metric: took 6.433746ms for pod "etcd-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.708590   23809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:02:58.855920   23809 request.go:629] Waited for 147.283954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:58.855983   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:58.855988   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:58.855995   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:58.856001   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:58.859977   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.056067   23809 request.go:629] Waited for 195.181513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.056122   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.056127   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.056134   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.056141   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.059655   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
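The "Waited ... due to client-side throttling" entries come from client-go's own rate limiter, not from server-side API Priority and Fairness: the rest.Config dumped earlier has QPS:0 and Burst:0, which client-go treats as its defaults of 5 requests/sec with a burst of 10, so tight poll loops get spaced out. A sketch of how a caller could raise those limits (hypothetical; the QPS/Burst values are illustrative, not what minikube uses):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// QPS:0/Burst:0 in rest.Config fall back to client-go's defaults (5 QPS,
	// burst 10), which is what produces the throttling waits in this log.
	// Raising both removes the artificial pauses during tight poll loops.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client constructed: %T\n", cs)
}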
	I0115 03:02:59.255649   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:59.255680   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.255689   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.255695   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.259660   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.455741   23809 request.go:629] Waited for 195.394895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.455796   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.455801   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.455809   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.455817   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.459566   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:02:59.709205   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:02:59.709232   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.709243   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.709251   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.717699   23809 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 03:02:59.856077   23809 request.go:629] Waited for 137.322858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.856154   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:02:59.856164   23809 round_trippers.go:469] Request Headers:
	I0115 03:02:59.856174   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:02:59.856188   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:02:59.860088   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.209758   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:00.209778   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.209786   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.209799   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.213287   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.255460   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:00.255485   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.255493   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.255499   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.259529   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:00.709379   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:00.709400   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.709408   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.709414   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.713365   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.714455   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:00.714475   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:00.714486   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:00.714496   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:00.717621   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:00.718148   23809 pod_ready.go:102] pod "etcd-ha-680410-m03" in "kube-system" namespace has status "Ready":"False"
	I0115 03:03:01.208972   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:01.209002   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.209014   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.209023   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.212393   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:01.213249   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:01.213263   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.213270   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.213276   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.216292   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:03:01.709473   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:01.709493   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.709501   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.709507   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.713625   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:01.714330   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:01.714346   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:01.714354   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:01.714359   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:01.717634   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.209600   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:02.209623   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.209634   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.209643   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.213453   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.213990   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:02.214006   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.214016   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.214024   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.217508   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.709454   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:02.709472   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.709480   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.709487   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.713600   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:02.714525   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:02.714538   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:02.714545   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:02.714551   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:02.717946   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:02.718489   23809 pod_ready.go:102] pod "etcd-ha-680410-m03" in "kube-system" namespace has status "Ready":"False"
	I0115 03:03:03.209066   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:03.209083   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.209091   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.209098   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.212511   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:03.213171   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:03.213186   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.213193   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.213198   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.216363   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:03.709234   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:03.709253   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.709261   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.709266   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.714028   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:03.715385   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:03.715419   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:03.715431   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:03.715441   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:03.718519   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.209296   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:04.209317   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.209325   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.209331   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.213015   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.213947   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:04.213962   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.213972   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.213981   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.217508   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.709174   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-680410-m03
	I0115 03:03:04.709195   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.709203   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.709209   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.712969   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.714285   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:04.714305   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.714315   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.714324   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.718669   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:04.720745   23809 pod_ready.go:92] pod "etcd-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:04.720764   23809 pod_ready.go:81] duration metric: took 6.01216922s for pod "etcd-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.720786   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.720848   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410
	I0115 03:03:04.720861   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.720868   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.720873   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.724905   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:04.725880   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:04.725899   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.725910   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.725920   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.731046   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:03:04.732089   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:04.732114   23809 pod_ready.go:81] duration metric: took 11.320601ms for pod "kube-apiserver-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.732126   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.732196   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m02
	I0115 03:03:04.732206   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.732215   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.732226   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.735489   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.736211   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:04.736227   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.736237   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.736246   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.739273   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:04.739917   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:04.739934   23809 pod_ready.go:81] duration metric: took 7.79758ms for pod "kube-apiserver-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.739945   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:04.855235   23809 request.go:629] Waited for 115.213898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m03
	I0115 03:03:04.855337   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-680410-m03
	I0115 03:03:04.855351   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:04.855362   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:04.855375   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:04.860116   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:05.055822   23809 request.go:629] Waited for 194.866101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:05.055871   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:05.055876   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.055885   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.055891   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.059316   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:05.060070   23809 pod_ready.go:92] pod "kube-apiserver-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:05.060090   23809 pod_ready.go:81] duration metric: took 320.131298ms for pod "kube-apiserver-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.060100   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.256162   23809 request.go:629] Waited for 195.993718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:03:05.256224   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410
	I0115 03:03:05.256234   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.256245   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.256257   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.259867   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:05.456024   23809 request.go:629] Waited for 195.357918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:05.456103   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:05.456110   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.456118   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.456124   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.465666   23809 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 03:03:05.466604   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:05.466624   23809 pod_ready.go:81] duration metric: took 406.515979ms for pod "kube-controller-manager-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.466638   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.655765   23809 request.go:629] Waited for 189.054178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:03:05.655856   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m02
	I0115 03:03:05.655867   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.655878   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.655891   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.659773   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:05.856199   23809 request.go:629] Waited for 195.368131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:05.856271   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:05.856282   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:05.856290   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:05.856298   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:05.860663   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:05.861676   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:05.861698   23809 pod_ready.go:81] duration metric: took 395.047492ms for pod "kube-controller-manager-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:05.861710   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.056149   23809 request.go:629] Waited for 194.375054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m03
	I0115 03:03:06.056256   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-680410-m03
	I0115 03:03:06.056267   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.056277   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.056286   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.062639   23809 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0115 03:03:06.255810   23809 request.go:629] Waited for 192.333823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:06.255864   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:06.255871   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.255902   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.255920   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.259723   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:06.260470   23809 pod_ready.go:92] pod "kube-controller-manager-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:06.260489   23809 pod_ready.go:81] duration metric: took 398.772097ms for pod "kube-controller-manager-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.260497   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.456117   23809 request.go:629] Waited for 195.537538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:03:06.456678   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2kmv
	I0115 03:03:06.456701   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.456715   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.456728   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.467926   23809 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0115 03:03:06.656087   23809 request.go:629] Waited for 187.357695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:06.656154   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:06.656160   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.656167   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.656176   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.660105   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:06.660948   23809 pod_ready.go:92] pod "kube-proxy-g2kmv" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:06.660970   23809 pod_ready.go:81] duration metric: took 400.466795ms for pod "kube-proxy-g2kmv" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.660982   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:06.856031   23809 request.go:629] Waited for 194.976379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:03:06.856080   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hlbjr
	I0115 03:03:06.856085   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:06.856093   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:06.856102   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:06.859524   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.055617   23809 request.go:629] Waited for 195.176224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:07.055716   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:07.055732   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.055740   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.055749   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.059792   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:07.060769   23809 pod_ready.go:92] pod "kube-proxy-hlbjr" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:07.060788   23809 pod_ready.go:81] duration metric: took 399.798374ms for pod "kube-proxy-hlbjr" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.060801   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zfn27" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.255808   23809 request.go:629] Waited for 194.929509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfn27
	I0115 03:03:07.255891   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfn27
	I0115 03:03:07.255902   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.255910   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.255916   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.259541   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.455855   23809 request.go:629] Waited for 195.340547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:07.455928   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:07.455938   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.455946   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.455954   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.459599   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.460101   23809 pod_ready.go:92] pod "kube-proxy-zfn27" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:07.460119   23809 pod_ready.go:81] duration metric: took 399.30478ms for pod "kube-proxy-zfn27" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.460132   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.655680   23809 request.go:629] Waited for 195.498701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:03:07.655748   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410
	I0115 03:03:07.655761   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.655773   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.655800   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.661613   23809 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 03:03:07.855559   23809 request.go:629] Waited for 193.344879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:07.855645   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410
	I0115 03:03:07.855653   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:07.855661   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:07.855667   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:07.859335   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:07.860134   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:07.860151   23809 pod_ready.go:81] duration metric: took 400.012975ms for pod "kube-scheduler-ha-680410" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:07.860159   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.055960   23809 request.go:629] Waited for 195.744784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:03:08.056037   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m02
	I0115 03:03:08.056042   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.056050   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.056059   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.059487   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:08.256040   23809 request.go:629] Waited for 195.913618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:08.256098   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m02
	I0115 03:03:08.256116   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.256124   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.256132   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.260329   23809 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 03:03:08.260758   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:08.260773   23809 pod_ready.go:81] duration metric: took 400.608211ms for pod "kube-scheduler-ha-680410-m02" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.260781   23809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.455881   23809 request.go:629] Waited for 195.041096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m03
	I0115 03:03:08.455959   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-680410-m03
	I0115 03:03:08.455967   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.455975   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.455981   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.459719   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:08.655658   23809 request.go:629] Waited for 195.368476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:08.655758   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes/ha-680410-m03
	I0115 03:03:08.655768   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.655778   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.655788   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.659538   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:08.660203   23809 pod_ready.go:92] pod "kube-scheduler-ha-680410-m03" in "kube-system" namespace has status "Ready":"True"
	I0115 03:03:08.660219   23809 pod_ready.go:81] duration metric: took 399.431163ms for pod "kube-scheduler-ha-680410-m03" in "kube-system" namespace to be "Ready" ...
	I0115 03:03:08.660228   23809 pod_ready.go:38] duration metric: took 9.999559937s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
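Each pod_ready check above pairs a GET on the pod with a GET on the node it runs on; the pod half reduces to testing the PodReady condition. A minimal sketch of that test (hypothetical program; the pod name is taken from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True -- the same test
// the pod_ready loop applies to each system-critical pod.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-680410-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd-ha-680410-m03 ready:", podIsReady(pod))
}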
	I0115 03:03:08.660241   23809 api_server.go:52] waiting for apiserver process to appear ...
	I0115 03:03:08.660294   23809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:03:08.676242   23809 api_server.go:72] duration metric: took 17.238083275s to wait for apiserver process to appear ...
	I0115 03:03:08.676264   23809 api_server.go:88] waiting for apiserver healthz status ...
	I0115 03:03:08.676285   23809 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0115 03:03:08.681918   23809 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0115 03:03:08.681988   23809 round_trippers.go:463] GET https://192.168.39.194:8443/version
	I0115 03:03:08.681996   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.682004   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.682010   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.684711   23809 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 03:03:08.684941   23809 api_server.go:141] control plane version: v1.28.4
	I0115 03:03:08.684959   23809 api_server.go:131] duration metric: took 8.687082ms to wait for apiserver health ...
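The healthz and version probes above map directly onto client-go's discovery client. A short sketch under the same default-kubeconfig assumption (hypothetical program):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz -- the endpoint probed above; a healthy apiserver answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// GET /version -- reported as v1.28.4 in this run.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}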
	I0115 03:03:08.684969   23809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 03:03:08.855274   23809 request.go:629] Waited for 170.245399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:08.855352   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:08.855361   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:08.855369   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:08.855378   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:08.862812   23809 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 03:03:08.869268   23809 system_pods.go:59] 24 kube-system pods found
	I0115 03:03:08.869292   23809 system_pods.go:61] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:03:08.869299   23809 system_pods.go:61] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:03:08.869305   23809 system_pods.go:61] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:03:08.869311   23809 system_pods.go:61] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:03:08.869322   23809 system_pods.go:61] "etcd-ha-680410-m03" [6b1380a5-d4d8-419a-a84a-b416e3985c86] Running
	I0115 03:03:08.869329   23809 system_pods.go:61] "kindnet-hw4rx" [78ebda65-da09-4808-86d3-2684faf1de94] Running
	I0115 03:03:08.869335   23809 system_pods.go:61] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:03:08.869342   23809 system_pods.go:61] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:03:08.869352   23809 system_pods.go:61] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:03:08.869359   23809 system_pods.go:61] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:03:08.869366   23809 system_pods.go:61] "kube-apiserver-ha-680410-m03" [8ba403df-4b0c-4fc3-a48e-457fab2a2f3e] Running
	I0115 03:03:08.869377   23809 system_pods.go:61] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:03:08.869386   23809 system_pods.go:61] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:03:08.869396   23809 system_pods.go:61] "kube-controller-manager-ha-680410-m03" [951e61a0-bedd-4a46-8681-a2575b15ae24] Running
	I0115 03:03:08.869406   23809 system_pods.go:61] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:03:08.869413   23809 system_pods.go:61] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:03:08.869423   23809 system_pods.go:61] "kube-proxy-zfn27" [91166a3e-cfbd-4a52-9816-1be24750df7d] Running
	I0115 03:03:08.869429   23809 system_pods.go:61] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:03:08.869439   23809 system_pods.go:61] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:03:08.869446   23809 system_pods.go:61] "kube-scheduler-ha-680410-m03" [cc4bebd0-a36f-4b3c-8783-227bc21a649b] Running
	I0115 03:03:08.869456   23809 system_pods.go:61] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:03:08.869463   23809 system_pods.go:61] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:03:08.869478   23809 system_pods.go:61] "kube-vip-ha-680410-m03" [5c179856-a694-4cfe-a0fa-2aefaae1c9f4] Running
	I0115 03:03:08.869487   23809 system_pods.go:61] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:03:08.869495   23809 system_pods.go:74] duration metric: took 184.518482ms to wait for pod list to return data ...
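The "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's local rate limiter (request.go), not from the server's APF: requests beyond the configured QPS/Burst are delayed in the client. A sketch, assuming default client-go settings and a hypothetical kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path, for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults: QPS=5, Burst=10. Bursts of requests beyond this
    	// are delayed locally and logged exactly as in the lines above.
    	cfg.QPS = 5
    	cfg.Burst = 10
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods\n", len(pods.Items))
    }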
	I0115 03:03:08.869508   23809 default_sa.go:34] waiting for default service account to be created ...
	I0115 03:03:09.055908   23809 request.go:629] Waited for 186.327345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:03:09.055977   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/default/serviceaccounts
	I0115 03:03:09.055982   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:09.055990   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:09.055997   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:09.059760   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:09.059872   23809 default_sa.go:45] found service account: "default"
	I0115 03:03:09.059886   23809 default_sa.go:55] duration metric: took 190.370286ms for default service account to be created ...
	I0115 03:03:09.059893   23809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 03:03:09.256244   23809 request.go:629] Waited for 196.26197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:09.256301   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/namespaces/kube-system/pods
	I0115 03:03:09.256305   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:09.256312   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:09.256318   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:09.264404   23809 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 03:03:09.271148   23809 system_pods.go:86] 24 kube-system pods found
	I0115 03:03:09.271168   23809 system_pods.go:89] "coredns-5dd5756b68-krvzt" [9b6c364f-51b5-4b1b-ae00-b4f9c5856796] Running
	I0115 03:03:09.271174   23809 system_pods.go:89] "coredns-5dd5756b68-mqq9g" [d4242838-ba09-41c8-91f0-022e0e69a3e9] Running
	I0115 03:03:09.271179   23809 system_pods.go:89] "etcd-ha-680410" [a546612f-83e1-44a2-baca-35023abbf880] Running
	I0115 03:03:09.271185   23809 system_pods.go:89] "etcd-ha-680410-m02" [f108e244-6b3c-4779-90d5-4b61742e0548] Running
	I0115 03:03:09.271199   23809 system_pods.go:89] "etcd-ha-680410-m03" [6b1380a5-d4d8-419a-a84a-b416e3985c86] Running
	I0115 03:03:09.271206   23809 system_pods.go:89] "kindnet-hw4rx" [78ebda65-da09-4808-86d3-2684faf1de94] Running
	I0115 03:03:09.271214   23809 system_pods.go:89] "kindnet-jjnbw" [8904ec59-1b44-4318-a26c-38032dbdd9e4] Running
	I0115 03:03:09.271220   23809 system_pods.go:89] "kindnet-qcjzf" [78ad99a5-8d9f-43dd-a4ac-dca4593293f0] Running
	I0115 03:03:09.271224   23809 system_pods.go:89] "kube-apiserver-ha-680410" [eb42bdaa-e121-45ad-846b-b8249abf1fdc] Running
	I0115 03:03:09.271233   23809 system_pods.go:89] "kube-apiserver-ha-680410-m02" [74463f07-8412-4b08-b3f7-34883a658839] Running
	I0115 03:03:09.271239   23809 system_pods.go:89] "kube-apiserver-ha-680410-m03" [8ba403df-4b0c-4fc3-a48e-457fab2a2f3e] Running
	I0115 03:03:09.271244   23809 system_pods.go:89] "kube-controller-manager-ha-680410" [fd1645ee-a6f0-497b-8809-9fab65d06c02] Running
	I0115 03:03:09.271250   23809 system_pods.go:89] "kube-controller-manager-ha-680410-m02" [aa583348-cf73-4963-b8e8-08752ecc8f5d] Running
	I0115 03:03:09.271255   23809 system_pods.go:89] "kube-controller-manager-ha-680410-m03" [951e61a0-bedd-4a46-8681-a2575b15ae24] Running
	I0115 03:03:09.271261   23809 system_pods.go:89] "kube-proxy-g2kmv" [26c4a5f1-238f-46f8-837f-692fc2c6077d] Running
	I0115 03:03:09.271265   23809 system_pods.go:89] "kube-proxy-hlbjr" [3d562b79-f315-40f2-9e01-2603e934b683] Running
	I0115 03:03:09.271274   23809 system_pods.go:89] "kube-proxy-zfn27" [91166a3e-cfbd-4a52-9816-1be24750df7d] Running
	I0115 03:03:09.271282   23809 system_pods.go:89] "kube-scheduler-ha-680410" [d9cf953e-3bb8-4ac4-b881-da730bd0efb0] Running
	I0115 03:03:09.271292   23809 system_pods.go:89] "kube-scheduler-ha-680410-m02" [56d5ea38-f044-415c-95f1-39a55675c267] Running
	I0115 03:03:09.271302   23809 system_pods.go:89] "kube-scheduler-ha-680410-m03" [cc4bebd0-a36f-4b3c-8783-227bc21a649b] Running
	I0115 03:03:09.271310   23809 system_pods.go:89] "kube-vip-ha-680410" [91251d4f-f9b3-4ecf-b1c7-fca841dec620] Running
	I0115 03:03:09.271321   23809 system_pods.go:89] "kube-vip-ha-680410-m02" [7558c85b-6238-4ee2-8180-b9f1a0c1270c] Running
	I0115 03:03:09.271327   23809 system_pods.go:89] "kube-vip-ha-680410-m03" [5c179856-a694-4cfe-a0fa-2aefaae1c9f4] Running
	I0115 03:03:09.271331   23809 system_pods.go:89] "storage-provisioner" [f82ce51d-9618-4656-91fa-3f77a60296c6] Running
	I0115 03:03:09.271339   23809 system_pods.go:126] duration metric: took 211.441542ms to wait for k8s-apps to be running ...
	I0115 03:03:09.271348   23809 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 03:03:09.271406   23809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:03:09.288173   23809 system_svc.go:56] duration metric: took 16.819764ms WaitForService to wait for kubelet
	I0115 03:03:09.288190   23809 kubeadm.go:576] duration metric: took 17.850035064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 03:03:09.288212   23809 node_conditions.go:102] verifying NodePressure condition ...
	I0115 03:03:09.455575   23809 request.go:629] Waited for 167.302649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.194:8443/api/v1/nodes
	I0115 03:03:09.455639   23809 round_trippers.go:463] GET https://192.168.39.194:8443/api/v1/nodes
	I0115 03:03:09.455647   23809 round_trippers.go:469] Request Headers:
	I0115 03:03:09.455654   23809 round_trippers.go:473]     Accept: application/json, */*
	I0115 03:03:09.455662   23809 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 03:03:09.459355   23809 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 03:03:09.461094   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:03:09.461116   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:03:09.461128   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:03:09.461133   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:03:09.461138   23809 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 03:03:09.461143   23809 node_conditions.go:123] node cpu capacity is 2
	I0115 03:03:09.461153   23809 node_conditions.go:105] duration metric: took 172.934804ms to run NodePressure ...
	I0115 03:03:09.461176   23809 start.go:240] waiting for startup goroutines ...
	I0115 03:03:09.461204   23809 start.go:254] writing updated cluster config ...
	I0115 03:03:09.461504   23809 ssh_runner.go:195] Run: rm -f paused
	I0115 03:03:09.512874   23809 start.go:599] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 03:03:09.515029   23809 out.go:177] * Done! kubectl is now configured to use "ha-680410" cluster and "default" namespace by default
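The "(minor skew: 1)" note above is informational: kubectl is supported within one minor version of the API server, so 1.29.0 against a 1.28.4 cluster passes. A sketch of the arithmetic (not minikube's actual code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference of the minor version components.
    func minorSkew(a, b string) int {
    	am, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(a, "v"), ".")[1])
    	bm, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(b, "v"), ".")[1])
    	if am > bm {
    		return am - bm
    	}
    	return bm - am
    }

    func main() {
    	fmt.Println(minorSkew("1.29.0", "1.28.4")) // 1
    }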
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71db211b3de4a       8c811b4aec35f       4 minutes ago       Running             busybox                   0                   b20d3f5bbcbc1       busybox-5bc68d56bd-g7qsd
	ac4741a7561c0       35d002bc4cbfa       5 minutes ago       Running             kube-vip                  1                   872187e8da13d       kube-vip-ha-680410
	68c9f1e1ac647       6e38f40d628db       5 minutes ago       Running             storage-provisioner       1                   80368b80a4e35       storage-provisioner
	e9b8823a0e760       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       0                   80368b80a4e35       storage-provisioner
	aca78f9075890       ead0a4a53df89       7 minutes ago       Running             coredns                   0                   18f2389d4dfa8       coredns-5dd5756b68-mqq9g
	e087fead2886d       ead0a4a53df89       7 minutes ago       Running             coredns                   0                   967aba65a98b9       coredns-5dd5756b68-krvzt
	ab177d4efea33       c7d1297425461       7 minutes ago       Running             kindnet-cni               0                   5fd8936ddd822       kindnet-jjnbw
	8395447eb2586       83f6cc407eed8       7 minutes ago       Running             kube-proxy                0                   85829bba706f2       kube-proxy-g2kmv
	330d559e17674       35d002bc4cbfa       8 minutes ago       Exited              kube-vip                  0                   872187e8da13d       kube-vip-ha-680410
	ec84efc819d75       73deb9a3f7025       8 minutes ago       Running             etcd                      0                   a9be7b584e2de       etcd-ha-680410
	7fbbef1932aec       e3db313c6dbc0       8 minutes ago       Running             kube-scheduler            0                   9bd06104e2643       kube-scheduler-ha-680410
	877ad092a6d4e       d058aa5ab969c       8 minutes ago       Running             kube-controller-manager   0                   72c0a3ac07595       kube-controller-manager-ha-680410
	7f2ebde9b0057       7fe0e6f37db33       8 minutes ago       Running             kube-apiserver            0                   72653b74ee7a0       kube-apiserver-ha-680410
	
	
	==> containerd <==
	-- Journal begins at Mon 2024-01-15 02:58:39 UTC, ends at Mon 2024-01-15 03:07:16 UTC. --
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.295506437Z" level=info msg="shim disconnected" id=330d559e17674f4b3936e8e4b4c4469ff009671f86c76a5316ca710827bb365f namespace=k8s.io
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.295732325Z" level=warning msg="cleaning up after shim disconnected" id=330d559e17674f4b3936e8e4b4c4469ff009671f86c76a5316ca710827bb365f namespace=k8s.io
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.295847595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.899387182Z" level=info msg="CreateContainer within sandbox \"872187e8da13da13a80d6169d9f95d068b0a171528a1045606abba0d399932d8\" for container &ContainerMetadata{Name:kube-vip,Attempt:1,}"
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.931237433Z" level=info msg="CreateContainer within sandbox \"872187e8da13da13a80d6169d9f95d068b0a171528a1045606abba0d399932d8\" for &ContainerMetadata{Name:kube-vip,Attempt:1,} returns container id \"ac4741a7561c0e0caddf21a3f098abe3a3b362b568584a136639d569f915ab20\""
	Jan 15 03:01:36 ha-680410 containerd[688]: time="2024-01-15T03:01:36.932117073Z" level=info msg="StartContainer for \"ac4741a7561c0e0caddf21a3f098abe3a3b362b568584a136639d569f915ab20\""
	Jan 15 03:01:37 ha-680410 containerd[688]: time="2024-01-15T03:01:37.431904627Z" level=info msg="StartContainer for \"ac4741a7561c0e0caddf21a3f098abe3a3b362b568584a136639d569f915ab20\" returns successfully"
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.024038211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-5bc68d56bd-g7qsd,Uid:b2908bcd-6b86-4135-b114-1476eafa9743,Namespace:default,Attempt:0,}"
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.128478590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.129055368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.129272659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.129454869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.616476161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-5bc68d56bd-g7qsd,Uid:b2908bcd-6b86-4135-b114-1476eafa9743,Namespace:default,Attempt:0,} returns sandbox id \"b20d3f5bbcbc168c07a056d2b87ec9c958957640ba3d28e79ee3bfc2416f89af\""
	Jan 15 03:03:11 ha-680410 containerd[688]: time="2024-01-15T03:03:11.620890919Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.001759655Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.003321035Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=725937"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.005514468Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.008397799Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.010687806Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.011163755Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 3.390075801s"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.011235838Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.020442637Z" level=info msg="CreateContainer within sandbox \"b20d3f5bbcbc168c07a056d2b87ec9c958957640ba3d28e79ee3bfc2416f89af\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.052560257Z" level=info msg="CreateContainer within sandbox \"b20d3f5bbcbc168c07a056d2b87ec9c958957640ba3d28e79ee3bfc2416f89af\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"71db211b3de4a67dd5f5c66bf81d090cfa90907459d320e1ff52dac5a72999ef\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.054438668Z" level=info msg="StartContainer for \"71db211b3de4a67dd5f5c66bf81d090cfa90907459d320e1ff52dac5a72999ef\""
	Jan 15 03:03:15 ha-680410 containerd[688]: time="2024-01-15T03:03:15.145987416Z" level=info msg="StartContainer for \"71db211b3de4a67dd5f5c66bf81d090cfa90907459d320e1ff52dac5a72999ef\" returns successfully"
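Both the "container status" table above and the PullImage/CreateContainer/StartContainer sequence in this journal go through the CRI socket advertised in the node annotations (unix:///run/containerd/containerd.sock). A minimal sketch, assuming the k8s.io/cri-api client and run on the node itself (e.g. via minikube ssh); error handling trimmed for brevity:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	// containerd's CRI endpoint on the node.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Equivalent of the PullImage line in the journal above.
    	img := runtimeapi.NewImageServiceClient(conn)
    	pull, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pulled:", pull.ImageRef)

    	// Equivalent of the "container status" table further up.
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range list.Containers {
    		fmt.Println(c.Id, c.Metadata.Name, c.State)
    	}
    }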
	
	
	==> coredns [aca78f90758903e3af45c02b6f76ed28b8f2b6ff5dbe5c843fbdce3a6bbf141b] <==
	[INFO] 10.244.1.2:42768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129162s
	[INFO] 10.244.1.2:51972 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001864917s
	[INFO] 10.244.0.4:60075 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277096s
	[INFO] 10.244.0.4:45278 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003237777s
	[INFO] 10.244.0.4:55502 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136484s
	[INFO] 10.244.2.3:46123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124123s
	[INFO] 10.244.2.3:54917 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00044995s
	[INFO] 10.244.2.3:44201 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108123s
	[INFO] 10.244.2.3:43689 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398446s
	[INFO] 10.244.2.3:32811 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092164s
	[INFO] 10.244.1.2:60995 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093737s
	[INFO] 10.244.1.2:41143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00114436s
	[INFO] 10.244.1.2:38837 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189027s
	[INFO] 10.244.0.4:35960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098321s
	[INFO] 10.244.0.4:40580 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131103s
	[INFO] 10.244.2.3:32957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177447s
	[INFO] 10.244.2.3:38476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126417s
	[INFO] 10.244.2.3:40136 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112359s
	[INFO] 10.244.2.3:48767 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081795s
	[INFO] 10.244.1.2:43184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139657s
	[INFO] 10.244.1.2:34920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00019567s
	[INFO] 10.244.0.4:34771 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012137s
	[INFO] 10.244.2.3:59729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144205s
	[INFO] 10.244.2.3:36371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000391863s
	[INFO] 10.244.1.2:38862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151045s
	
	
	==> coredns [e087fead2886d903d40cde823e6b0c43bf07fc180213b27e12a6aa979d3c7013] <==
	[INFO] 10.244.0.4:41716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000645671s
	[INFO] 10.244.0.4:40319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006606911s
	[INFO] 10.244.0.4:56428 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199319s
	[INFO] 10.244.0.4:49993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137413s
	[INFO] 10.244.0.4:45822 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196298s
	[INFO] 10.244.2.3:49969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001807508s
	[INFO] 10.244.2.3:59754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103654s
	[INFO] 10.244.2.3:46269 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153092s
	[INFO] 10.244.1.2:37428 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189523s
	[INFO] 10.244.1.2:52125 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001784134s
	[INFO] 10.244.1.2:45242 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077649s
	[INFO] 10.244.1.2:51192 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076033s
	[INFO] 10.244.1.2:51654 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148666s
	[INFO] 10.244.0.4:57400 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191186s
	[INFO] 10.244.0.4:58502 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035427s
	[INFO] 10.244.1.2:60504 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145899s
	[INFO] 10.244.1.2:58875 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080658s
	[INFO] 10.244.0.4:35193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178782s
	[INFO] 10.244.0.4:36630 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175501s
	[INFO] 10.244.0.4:44483 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215314s
	[INFO] 10.244.2.3:48596 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116005s
	[INFO] 10.244.2.3:50671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000334676s
	[INFO] 10.244.1.2:40815 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175593s
	[INFO] 10.244.1.2:43884 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179218s
	[INFO] 10.244.1.2:43516 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000323765s
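The lookups in both coredns logs can be reproduced from inside the cluster network by pointing a resolver at the cluster DNS service; 10.96.0.10 is the conventional kube-dns ClusterIP (the repeated PTR queries for "10.0.96.10.in-addr.arpa" above are its reverse name). A stdlib-only Go sketch:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Route all lookups to the cluster DNS service over UDP.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, "udp", "10.96.0.10:53")
    		},
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(addrs) // typically the apiserver ClusterIP, e.g. 10.96.0.1
    }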
	
	
	==> describe nodes <==
	Name:               ha-680410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T02_59_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 02:59:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:07:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 03:03:23 +0000   Mon, 15 Jan 2024 02:59:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    ha-680410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fccdf8018b24813b18cf29e87dcf19a
	  System UUID:                4fccdf80-18b2-4813-b18c-f29e87dcf19a
	  Boot ID:                    663c1288-e3cb-4dbf-b88e-8ae64994e27f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-g7qsd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 coredns-5dd5756b68-krvzt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m51s
	  kube-system                 coredns-5dd5756b68-mqq9g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m51s
	  kube-system                 etcd-ha-680410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m2s
	  kube-system                 kindnet-jjnbw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m51s
	  kube-system                 kube-apiserver-ha-680410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-controller-manager-ha-680410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-proxy-g2kmv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-scheduler-ha-680410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-vip-ha-680410                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m50s  kube-proxy       
	  Normal  Starting                 8m1s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m1s   kubelet          Node ha-680410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m1s   kubelet          Node ha-680410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m1s   kubelet          Node ha-680410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m52s  node-controller  Node ha-680410 event: Registered Node ha-680410 in Controller
	  Normal  NodeReady                7m46s  kubelet          Node ha-680410 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node ha-680410 event: Registered Node ha-680410 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-680410 event: Registered Node ha-680410 in Controller
	
	
	Name:               ha-680410-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T03_01_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 03:01:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:04:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 15 Jan 2024 03:03:24 +0000   Mon, 15 Jan 2024 03:05:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-680410-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d97e54f9884977b04fa0f9a8f6f4bf
	  System UUID:                74d97e54-f988-4977-b04f-a0f9a8f6f4bf
	  Boot ID:                    15c976f5-aa7d-4b1e-bcc6-fad76ebdfe1a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xq99z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 etcd-ha-680410-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m51s
	  kube-system                 kindnet-qcjzf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-apiserver-ha-680410-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-controller-manager-ha-680410-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-proxy-hlbjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-scheduler-ha-680410-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-vip-ha-680410-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m33s  kube-proxy       
	  Normal  RegisteredNode  5m22s  node-controller  Node ha-680410-m02 event: Registered Node ha-680410-m02 in Controller
	  Normal  RegisteredNode  4m12s  node-controller  Node ha-680410-m02 event: Registered Node ha-680410-m02 in Controller
	  Normal  NodeNotReady    97s    node-controller  Node ha-680410-m02 status is now: NodeNotReady
	
	
	Name:               ha-680410-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T03_02_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 03:02:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 03:03:17 +0000   Mon, 15 Jan 2024 03:02:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-680410-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5e9167a97d54e5784fc7b8cdfd0c427
	  System UUID:                a5e9167a-97d5-4e57-84fc-7b8cdfd0c427
	  Boot ID:                    cc914440-a625-4494-998c-44556ff1dd60
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h2zgj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 etcd-ha-680410-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m30s
	  kube-system                 kindnet-hw4rx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m30s
	  kube-system                 kube-apiserver-ha-680410-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-ha-680410-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-zfn27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-ha-680410-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-vip-ha-680410-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m26s  kube-proxy       
	  Normal  RegisteredNode  4m27s  node-controller  Node ha-680410-m03 event: Registered Node ha-680410-m03 in Controller
	  Normal  RegisteredNode  4m27s  node-controller  Node ha-680410-m03 event: Registered Node ha-680410-m03 in Controller
	  Normal  RegisteredNode  4m12s  node-controller  Node ha-680410-m03 event: Registered Node ha-680410-m03 in Controller
	
	
	Name:               ha-680410-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-680410-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a1913e45675b140227afacc1188b5058b7d6a5b
	                    minikube.k8s.io/name=ha-680410
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T03_04_24_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 03:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-680410-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 03:07:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 03:04:54 +0000   Mon, 15 Jan 2024 03:04:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-680410-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a96c93c5f4c248fdb2fe3b8a30beaa9c
	  System UUID:                a96c93c5-f4c2-48fd-b2fe-3b8a30beaa9c
	  Boot ID:                    9afbc99b-309c-467a-8dbb-872148a7c4be
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-f7bpb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-5kthb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x5 over 2m54s)  kubelet          Node ha-680410-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x5 over 2m54s)  kubelet          Node ha-680410-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x5 over 2m54s)  kubelet          Node ha-680410-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-680410-m04 status is now: NodeReady
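Across the four node dumps above, the relevant signal for the HA failures is ha-680410-m02: Ready=Unknown with the node.kubernetes.io/unreachable taints, because its kubelet stopped posting status during StopSecondaryNode. A client-go sketch of reading the same conditions programmatically (hypothetical kubeconfig path):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print each node's Ready condition; m02 would show Status=Unknown
    	// with reason NodeStatusUnknown while it is stopped.
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				fmt.Printf("%-16s Ready=%-8s reason=%s\n", n.Name, c.Status, c.Reason)
    			}
    		}
    	}
    }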
	
	
	==> dmesg <==
	[Jan15 02:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067614] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.334408] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.238048] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144249] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.062200] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.651400] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +0.109143] systemd-fstab-generator[568]: Ignoring "noauto" for root device
	[  +0.140488] systemd-fstab-generator[582]: Ignoring "noauto" for root device
	[  +0.109723] systemd-fstab-generator[593]: Ignoring "noauto" for root device
	[  +0.226340] systemd-fstab-generator[620]: Ignoring "noauto" for root device
	[  +5.696852] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.696529] systemd-fstab-generator[736]: Ignoring "noauto" for root device
	[Jan15 02:59] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[ +10.813267] systemd-fstab-generator[1362]: Ignoring "noauto" for root device
	[ +17.501730] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [ec84efc819d758b087500345117940598a649b6a4f78324c12cdebe9fc4e3902] <==
	{"level":"warn","ts":"2024-01-15T03:07:16.353754Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.357653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.383381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.387487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.392665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.400588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.406137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.409724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.418468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.419208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.424816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.430681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.433906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.437362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.444994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.450788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.456481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.461617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.465098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.466287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.472606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.478534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.487651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.48822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-01-15T03:07:16.539811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b4bd7d4638784c91","from":"b4bd7d4638784c91","remote-peer-id":"425d0f171127d180","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:07:16 up 8 min,  0 users,  load average: 0.55, 0.44, 0.24
	Linux ha-680410 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [ab177d4efea33a14988c75111214a7dafe82cbfa3018bad64936313aa34a7b1c] <==
	I0115 03:06:40.659388       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	I0115 03:06:50.677395       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0115 03:06:50.677443       1 main.go:227] handling current node
	I0115 03:06:50.677462       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0115 03:06:50.677468       1 main.go:250] Node ha-680410-m02 has CIDR [10.244.1.0/24] 
	I0115 03:06:50.677776       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0115 03:06:50.677811       1 main.go:250] Node ha-680410-m03 has CIDR [10.244.2.0/24] 
	I0115 03:06:50.678013       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0115 03:06:50.678045       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	I0115 03:07:00.690155       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0115 03:07:00.690204       1 main.go:227] handling current node
	I0115 03:07:00.690221       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0115 03:07:00.690227       1 main.go:250] Node ha-680410-m02 has CIDR [10.244.1.0/24] 
	I0115 03:07:00.690645       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0115 03:07:00.690685       1 main.go:250] Node ha-680410-m03 has CIDR [10.244.2.0/24] 
	I0115 03:07:00.690748       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0115 03:07:00.690753       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	I0115 03:07:10.704663       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0115 03:07:10.704710       1 main.go:227] handling current node
	I0115 03:07:10.704735       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0115 03:07:10.704741       1 main.go:250] Node ha-680410-m02 has CIDR [10.244.1.0/24] 
	I0115 03:07:10.704845       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0115 03:07:10.704876       1 main.go:250] Node ha-680410-m03 has CIDR [10.244.2.0/24] 
	I0115 03:07:10.704921       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0115 03:07:10.705359       1 main.go:250] Node ha-680410-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7f2ebde9b00575ac2b6c28bd14dbc5b2681fc477999ccd8101b7af4f1eec374a] <==
	Trace[172462707]: [4.956611513s] [4.956611513s] END
	I0115 03:01:40.103276       1 trace.go:236] Trace[1664719019]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6aebf58f-b5a2-4122-9685-43b6042b1762,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-3epklqu42frowwycwc5b3xum5u,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (15-Jan-2024 03:01:39.602) (total time: 501ms):
	Trace[1664719019]: ["GuaranteedUpdate etcd3" audit-id:6aebf58f-b5a2-4122-9685-43b6042b1762,key:/leases/kube-system/apiserver-3epklqu42frowwycwc5b3xum5u,type:*coordination.Lease,resource:leases.coordination.k8s.io 500ms (03:01:39.602)
	Trace[1664719019]:  ---"Txn call completed" 499ms (03:01:40.103)]
	Trace[1664719019]: [501.090932ms] [501.090932ms] END
	I0115 03:01:40.132646       1 trace.go:236] Trace[1561202185]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3a904893-3107-405e-904c-6f3f5a201318,client:192.168.39.178,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Jan-2024 03:01:33.226) (total time: 6906ms):
	Trace[1561202185]: ["Create etcd3" audit-id:3a904893-3107-405e-904c-6f3f5a201318,key:/pods/kube-system/kube-controller-manager-ha-680410-m02,type:*core.Pod,resource:pods 6895ms (03:01:33.236)
	Trace[1561202185]:  ---"Txn call succeeded" 6856ms (03:01:40.093)]
	Trace[1561202185]: ---"Write to database call failed" len:2375,err:pods "kube-controller-manager-ha-680410-m02" already exists 38ms (03:01:40.132)
	Trace[1561202185]: [6.906197571s] [6.906197571s] END
	I0115 03:01:40.133353       1 trace.go:236] Trace[843603697]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:eb61f224-0bff-4d13-a822-7c6c5684cc21,client:192.168.39.178,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Jan-2024 03:01:33.236) (total time: 6896ms):
	Trace[843603697]: ["Create etcd3" audit-id:eb61f224-0bff-4d13-a822-7c6c5684cc21,key:/pods/kube-system/kube-scheduler-ha-680410-m02,type:*core.Pod,resource:pods 6895ms (03:01:33.237)
	Trace[843603697]:  ---"Txn call succeeded" 6855ms (03:01:40.093)]
	Trace[843603697]: ---"Write to database call failed" len:1220,err:pods "kube-scheduler-ha-680410-m02" already exists 40ms (03:01:40.133)
	Trace[843603697]: [6.896605563s] [6.896605563s] END
	I0115 03:01:40.139727       1 trace.go:236] Trace[1201279704]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:dd127ad8-253d-45ee-a66e-98e9eeb15c77,client:192.168.39.178,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Jan-2024 03:01:33.234) (total time: 6905ms):
	Trace[1201279704]: ["Create etcd3" audit-id:dd127ad8-253d-45ee-a66e-98e9eeb15c77,key:/pods/kube-system/etcd-ha-680410-m02,type:*core.Pod,resource:pods 6901ms (03:01:33.237)
	Trace[1201279704]:  ---"Txn call succeeded" 6858ms (03:01:40.096)]
	Trace[1201279704]: ---"Write to database call failed" len:2214,err:pods "etcd-ha-680410-m02" already exists 42ms (03:01:40.139)
	Trace[1201279704]: [6.905430194s] [6.905430194s] END
	E0115 03:03:47.968507       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.194:47330->192.168.39.194:10250: write: connection reset by peer
	E0115 03:03:48.832093       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.194:52604->192.168.39.182:10250: write: broken pipe
	E0115 03:03:49.454503       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.194:52620->192.168.39.182:10250: write: broken pipe
	E0115 03:03:50.834553       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.39.254:8443->192.168.39.1:54982: write: connection reset by peer
	W0115 03:05:11.296259       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.194]
	
	
	==> kube-controller-manager [877ad092a6d4e30ab9be6b910e6a316a240074794036a32f8a12817c49097d08] <==
	I0115 03:03:47.356250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.065µs"
	E0115 03:04:22.126407       1 certificate_controller.go:146] Sync csr-jfxlq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jfxlq": the object has been modified; please apply your changes to the latest version and try again
	I0115 03:04:23.646608       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-680410-m04\" does not exist"
	I0115 03:04:23.693989       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-95xln"
	I0115 03:04:23.694054       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f7bpb"
	I0115 03:04:23.705433       1 range_allocator.go:380] "Set node PodCIDR" node="ha-680410-m04" podCIDRs=["10.244.3.0/24"]
	I0115 03:04:23.856529       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-qb2ms"
	I0115 03:04:23.912707       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-95xln"
	I0115 03:04:23.930873       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-w6km4"
	I0115 03:04:23.972833       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-5shm7"
	I0115 03:04:24.690247       1 event.go:307] "Event occurred" object="ha-680410-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-680410-m04 event: Registered Node ha-680410-m04 in Controller"
	I0115 03:04:24.711360       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-680410-m04"
	I0115 03:04:33.771036       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-680410-m04"
	I0115 03:05:39.740267       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-680410-m04"
	I0115 03:05:39.740525       1 event.go:307] "Event occurred" object="ha-680410-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-680410-m02 status is now: NodeNotReady"
	I0115 03:05:39.760591       1 event.go:307] "Event occurred" object="kube-system/kindnet-qcjzf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.783095       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.799424       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.816376       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.831430       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.848497       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-680410-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.860879       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-xq99z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.882132       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-hlbjr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0115 03:05:39.925248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.582653ms"
	I0115 03:05:39.925357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.447µs"
	
	
	==> kube-proxy [8395447eb258631ee0befb4adcb4a9be346a5a3f24c2351fc199a299caec200e] <==
	I0115 02:59:26.147708       1 server_others.go:69] "Using iptables proxy"
	I0115 02:59:26.162190       1 node.go:141] Successfully retrieved node IP: 192.168.39.194
	I0115 02:59:26.217320       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 02:59:26.217341       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 02:59:26.223236       1 server_others.go:152] "Using iptables Proxier"
	I0115 02:59:26.223545       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 02:59:26.224394       1 server.go:846] "Version info" version="v1.28.4"
	I0115 02:59:26.224584       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 02:59:26.225617       1 config.go:188] "Starting service config controller"
	I0115 02:59:26.225876       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 02:59:26.226022       1 config.go:97] "Starting endpoint slice config controller"
	I0115 02:59:26.226072       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 02:59:26.228817       1 config.go:315] "Starting node config controller"
	I0115 02:59:26.229012       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 02:59:26.326588       1 shared_informer.go:318] Caches are synced for service config
	I0115 02:59:26.326762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 02:59:26.337116       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7fbbef1932aecead8bb6799eeb70b29c85e5d27cc193309ee6ae4e88777cc0b5] <==
	I0115 03:03:10.542617       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5bc68d56bd-xhzg2" node="ha-680410-m03"
	E0115 03:03:10.578262       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-xq99z\": pod busybox-5bc68d56bd-xq99z is already assigned to node \"ha-680410-m02\"" plugin="DefaultBinder" pod="default/busybox-5bc68d56bd-xq99z" node="ha-680410-m02"
	E0115 03:03:10.578426       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 06f2f28a-9208-4fa7-aff9-5fa40942c0b7(default/busybox-5bc68d56bd-xq99z) wasn't assumed so cannot be forgotten"
	E0115 03:03:10.580286       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-xq99z\": pod busybox-5bc68d56bd-xq99z is already assigned to node \"ha-680410-m02\"" pod="default/busybox-5bc68d56bd-xq99z"
	I0115 03:03:10.580482       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5bc68d56bd-xq99z" node="ha-680410-m02"
	E0115 03:03:45.481359       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-h2zgj\": pod busybox-5bc68d56bd-h2zgj is already assigned to node \"ha-680410-m03\"" plugin="DefaultBinder" pod="default/busybox-5bc68d56bd-h2zgj" node="ha-680410-m03"
	E0115 03:03:45.481548       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 722c108e-ed9d-4116-89a6-872c6c470ad1(default/busybox-5bc68d56bd-h2zgj) wasn't assumed so cannot be forgotten"
	E0115 03:03:45.481625       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5bc68d56bd-h2zgj\": pod busybox-5bc68d56bd-h2zgj is already assigned to node \"ha-680410-m03\"" pod="default/busybox-5bc68d56bd-h2zgj"
	I0115 03:03:45.481683       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5bc68d56bd-h2zgj" node="ha-680410-m03"
	E0115 03:04:23.731876       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-95xln\": pod kube-proxy-95xln is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-95xln" node="ha-680410-m04"
	E0115 03:04:23.734474       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ab95e21a-71ee-4861-aad4-9bfd0021caea(kube-system/kube-proxy-95xln) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.735022       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-95xln\": pod kube-proxy-95xln is already assigned to node \"ha-680410-m04\"" pod="kube-system/kube-proxy-95xln"
	I0115 03:04:23.735510       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-95xln" node="ha-680410-m04"
	E0115 03:04:23.825216       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qb2ms\": pod kindnet-qb2ms is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qb2ms" node="ha-680410-m04"
	E0115 03:04:23.826232       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7482e17d-b233-41e2-8475-a4cb43663e1c(kube-system/kindnet-qb2ms) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.825914       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5shm7\": pod kube-proxy-5shm7 is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5shm7" node="ha-680410-m04"
	E0115 03:04:23.827817       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e80768c7-121a-4da8-9428-b9ee6922e2be(kube-system/kube-proxy-5shm7) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.827777       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qb2ms\": pod kindnet-qb2ms is already assigned to node \"ha-680410-m04\"" pod="kube-system/kindnet-qb2ms"
	I0115 03:04:23.830313       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qb2ms" node="ha-680410-m04"
	E0115 03:04:23.831900       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5shm7\": pod kube-proxy-5shm7 is already assigned to node \"ha-680410-m04\"" pod="kube-system/kube-proxy-5shm7"
	I0115 03:04:23.836812       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5shm7" node="ha-680410-m04"
	E0115 03:04:23.876123       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5kthb\": pod kube-proxy-5kthb is already assigned to node \"ha-680410-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5kthb" node="ha-680410-m04"
	E0115 03:04:23.876829       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e3d21333-880a-4624-8ae4-cc3ee7b558b9(kube-system/kube-proxy-5kthb) wasn't assumed so cannot be forgotten"
	E0115 03:04:23.876920       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5kthb\": pod kube-proxy-5kthb is already assigned to node \"ha-680410-m04\"" pod="kube-system/kube-proxy-5kthb"
	I0115 03:04:23.878223       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5kthb" node="ha-680410-m04"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 02:58:39 UTC, ends at Mon 2024-01-15 03:07:16 UTC. --
	Jan 15 03:03:10 ha-680410 kubelet[1369]: I0115 03:03:10.769528    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njn9g\" (UniqueName: \"kubernetes.io/projected/b2908bcd-6b86-4135-b114-1476eafa9743-kube-api-access-njn9g\") pod \"busybox-5bc68d56bd-g7qsd\" (UID: \"b2908bcd-6b86-4135-b114-1476eafa9743\") " pod="default/busybox-5bc68d56bd-g7qsd"
	Jan 15 03:03:11 ha-680410 kubelet[1369]: I0115 03:03:11.280115    1369 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w42zl\" (UniqueName: \"kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl\") pod \"bc9bf224-d003-41e8-9cda-ba1d8ae491e3\" (UID: \"bc9bf224-d003-41e8-9cda-ba1d8ae491e3\") "
	Jan 15 03:03:11 ha-680410 kubelet[1369]: I0115 03:03:11.293133    1369 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl" (OuterVolumeSpecName: "kube-api-access-w42zl") pod "bc9bf224-d003-41e8-9cda-ba1d8ae491e3" (UID: "bc9bf224-d003-41e8-9cda-ba1d8ae491e3"). InnerVolumeSpecName "kube-api-access-w42zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 03:03:11 ha-680410 kubelet[1369]: I0115 03:03:11.381378    1369 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w42zl\" (UniqueName: \"kubernetes.io/projected/bc9bf224-d003-41e8-9cda-ba1d8ae491e3-kube-api-access-w42zl\") on node \"ha-680410\" DevicePath \"\""
	Jan 15 03:03:13 ha-680410 kubelet[1369]: I0115 03:03:13.465208    1369 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bc9bf224-d003-41e8-9cda-ba1d8ae491e3" path="/var/lib/kubelet/pods/bc9bf224-d003-41e8-9cda-ba1d8ae491e3/volumes"
	Jan 15 03:03:15 ha-680410 kubelet[1369]: E0115 03:03:15.514412    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:03:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:03:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:03:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:04:15 ha-680410 kubelet[1369]: E0115 03:04:15.510800    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:04:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:04:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:04:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:05:15 ha-680410 kubelet[1369]: E0115 03:05:15.514070    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:05:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:05:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:05:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:06:15 ha-680410 kubelet[1369]: E0115 03:06:15.515060    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:06:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:06:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:06:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 03:07:15 ha-680410 kubelet[1369]: E0115 03:07:15.512302    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 03:07:15 ha-680410 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 03:07:15 ha-680410 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 03:07:15 ha-680410 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-680410 -n ha-680410
helpers_test.go:261: (dbg) Run:  kubectl --context ha-680410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestHA/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestHA/serial/RestartSecondaryNode (56.92s)
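
The failure mode is consistent across the logs above: etcd on ha-680410 repeatedly dropped MsgHeartbeat to peer 425d0f171127d180 with remote-peer-active=false, and the controller-manager marked ha-680410-m02 NodeNotReady at 03:05:39, suggesting the restarted secondary had not rejoined by the end of log capture (03:07:16). A minimal triage sketch for a local rerun (profile, node, and pod names are taken from the logs above; the etcd certificate paths assume minikube's default /var/lib/minikube/certs layout):

    # Is the secondary back, from Kubernetes' point of view?
    out/minikube-linux-amd64 status -p ha-680410
    kubectl --context ha-680410 get nodes -o wide
    kubectl --context ha-680410 get events -A --field-selector reason=NodeNotReady

    # Is peer 425d0f171127d180 active again, from etcd's point of view?
    kubectl --context ha-680410 -n kube-system exec etcd-ha-680410 -- \
      etcdctl --cacert=/var/lib/minikube/certs/etcd/ca.crt \
              --cert=/var/lib/minikube/certs/etcd/server.crt \
              --key=/var/lib/minikube/certs/etcd/server.key \
              member list -w table

If the member list still shows the m02 peer while the node stays NotReady, the "sending buffer is full" warnings above are a symptom of the unreachable peer rather than a cause.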

                                                
                                    

Test pass (295/337)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 59.51
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 49.18
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 54.79
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 126.74
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 216.72
38 TestAddons/parallel/Registry 19.54
40 TestAddons/parallel/InspektorGadget 11.03
41 TestAddons/parallel/MetricsServer 6.97
42 TestAddons/parallel/HelmTiller 30.67
44 TestAddons/parallel/CSI 70.97
45 TestAddons/parallel/Headlamp 41.06
46 TestAddons/parallel/CloudSpanner 5.62
47 TestAddons/parallel/LocalPath 58.41
48 TestAddons/parallel/NvidiaDevicePlugin 5.81
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 92.35
54 TestCertOptions 57.81
55 TestCertExpiration 270.25
57 TestForceSystemdFlag 51.39
58 TestForceSystemdEnv 99.86
60 TestKVMDriverInstallOrUpdate 22.25
64 TestErrorSpam/setup 48.58
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.55
68 TestErrorSpam/unpause 1.63
69 TestErrorSpam/stop 4.7
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 100.18
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 42.35
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.84
81 TestFunctional/serial/CacheCmd/cache/add_local 3.96
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 49.03
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.4
92 TestFunctional/serial/LogsFileCmd 1.43
93 TestFunctional/serial/InvalidService 4.79
95 TestFunctional/parallel/ConfigCmd 0.37
96 TestFunctional/parallel/DashboardCmd 12.06
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.14
99 TestFunctional/parallel/StatusCmd 0.77
103 TestFunctional/parallel/ServiceCmdConnect 14.54
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 52.01
107 TestFunctional/parallel/SSHCmd 0.41
108 TestFunctional/parallel/CpCmd 1.49
109 TestFunctional/parallel/MySQL 29.09
110 TestFunctional/parallel/FileSync 0.21
111 TestFunctional/parallel/CertSync 1.3
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 0.8
120 TestFunctional/parallel/Version/short 0.07
121 TestFunctional/parallel/Version/components 0.63
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.82
127 TestFunctional/parallel/ImageCommands/Setup 2.51
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.24
141 TestFunctional/parallel/MountCmd/any-port 19.51
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.77
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.3
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.2
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.67
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.86
148 TestFunctional/parallel/MountCmd/specific-port 2.24
149 TestFunctional/parallel/MountCmd/VerifyCleanup 0.85
150 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
152 TestFunctional/parallel/ProfileCmd/profile_list 0.34
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
154 TestFunctional/parallel/ServiceCmd/List 1.26
155 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
157 TestFunctional/parallel/ServiceCmd/Format 0.33
158 TestFunctional/parallel/ServiceCmd/URL 0.32
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestHA/serial/StartCluster 282.35
166 TestHA/serial/DeployApp 39.49
167 TestHA/serial/PingHostFromPods 1.36
168 TestHA/serial/AddWorkerNode 49.85
169 TestHA/serial/NodeLabels 0.07
170 TestHA/serial/HAppyAfterClusterStart 0.6
171 TestHA/serial/CopyFile 13.64
173 TestHA/serial/DegradedAfterControlPlaneNodeStop 3.51
175 TestHA/serial/HAppyAfterSecondaryNodeRestart 0.43
176 TestHA/serial/RestartClusterKeepsNodes 394.44
177 TestHA/serial/DeleteSecondaryNode 7.98
178 TestHA/serial/DegradedAfterSecondaryNodeDelete 0.39
179 TestHA/serial/StopCluster 275.33
180 TestHA/serial/RestartCluster 157.09
181 TestHA/serial/DegradedAfterClusterRestart 0.42
182 TestHA/serial/AddSecondaryNode 74.32
183 TestHA/serial/HAppyAfterSecondaryNodeAdd 0.58
186 TestIngressAddonLegacy/StartLegacyK8sCluster 94.64
188 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.42
189 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
190 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.5
193 TestJSONOutput/start/Command 100.64
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.66
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.59
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 7.24
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.21
221 TestMainNoArgs 0.05
222 TestMinikubeProfile 102.01
225 TestMountStart/serial/StartWithMountFirst 31.04
226 TestMountStart/serial/VerifyMountFirst 0.41
227 TestMountStart/serial/StartWithMountSecond 30.29
228 TestMountStart/serial/VerifyMountSecond 0.4
229 TestMountStart/serial/DeleteFirst 0.71
230 TestMountStart/serial/VerifyMountPostDelete 0.5
231 TestMountStart/serial/Stop 1.42
232 TestMountStart/serial/RestartStopped 25.32
233 TestMountStart/serial/VerifyMountPostStop 0.41
236 TestMultiNode/serial/FreshStart2Nodes 108.94
237 TestMultiNode/serial/DeployApp2Nodes 6.18
238 TestMultiNode/serial/PingHostFrom2Pods 0.93
239 TestMultiNode/serial/AddNode 43.85
240 TestMultiNode/serial/MultiNodeLabels 0.06
241 TestMultiNode/serial/ProfileList 0.23
242 TestMultiNode/serial/CopyFile 7.64
243 TestMultiNode/serial/StopNode 2.28
244 TestMultiNode/serial/StartAfterStop 29.6
245 TestMultiNode/serial/RestartKeepsNodes 304.98
246 TestMultiNode/serial/DeleteNode 2.18
247 TestMultiNode/serial/StopMultiNode 183.87
248 TestMultiNode/serial/RestartMultiNode 114.73
249 TestMultiNode/serial/ValidateNameConflict 48.96
254 TestPreload 297.58
256 TestScheduledStopUnix 118.12
260 TestRunningBinaryUpgrade 235.39
262 TestKubernetesUpgrade 205.13
265 TestStoppedBinaryUpgrade/Setup 3.73
266 TestPause/serial/Start 72.73
267 TestStoppedBinaryUpgrade/Upgrade 242.23
268 TestPause/serial/SecondStartNoReconfiguration 69.97
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
271 TestNoKubernetes/serial/StartWithK8s 52.04
279 TestNetworkPlugins/group/false 3.86
283 TestPause/serial/Pause 0.69
284 TestPause/serial/VerifyStatus 0.29
285 TestPause/serial/Unpause 0.69
286 TestPause/serial/PauseAgain 0.84
287 TestPause/serial/DeletePaused 0.81
288 TestPause/serial/VerifyDeletedResources 1.77
289 TestNoKubernetes/serial/StartWithStopK8s 76.5
297 TestNoKubernetes/serial/Start 31.98
298 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
299 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
300 TestNoKubernetes/serial/ProfileList 1.14
301 TestNoKubernetes/serial/Stop 1.35
302 TestNoKubernetes/serial/StartNoArgs 68.09
303 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
304 TestNetworkPlugins/group/auto/Start 101.8
305 TestNetworkPlugins/group/kindnet/Start 83.77
306 TestNetworkPlugins/group/calico/Start 133.98
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/auto/KubeletFlags 0.24
309 TestNetworkPlugins/group/auto/NetCatPod 11.29
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
311 TestNetworkPlugins/group/kindnet/NetCatPod 9.34
312 TestNetworkPlugins/group/kindnet/DNS 0.23
313 TestNetworkPlugins/group/kindnet/Localhost 0.17
314 TestNetworkPlugins/group/kindnet/HairPin 0.16
315 TestNetworkPlugins/group/auto/DNS 0.24
316 TestNetworkPlugins/group/auto/Localhost 0.2
317 TestNetworkPlugins/group/auto/HairPin 0.21
318 TestNetworkPlugins/group/custom-flannel/Start 89.44
319 TestNetworkPlugins/group/enable-default-cni/Start 101.33
320 TestNetworkPlugins/group/flannel/Start 134.74
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/calico/KubeletFlags 0.26
323 TestNetworkPlugins/group/calico/NetCatPod 11.27
324 TestNetworkPlugins/group/calico/DNS 0.18
325 TestNetworkPlugins/group/calico/Localhost 0.13
326 TestNetworkPlugins/group/calico/HairPin 0.13
327 TestNetworkPlugins/group/bridge/Start 101.96
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.5
330 TestNetworkPlugins/group/custom-flannel/DNS 0.17
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
339 TestStartStop/group/old-k8s-version/serial/FirstStart 138.84
341 TestStartStop/group/no-preload/serial/FirstStart 211.99
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
344 TestNetworkPlugins/group/flannel/NetCatPod 9.21
345 TestNetworkPlugins/group/flannel/DNS 0.19
346 TestNetworkPlugins/group/flannel/Localhost 0.15
347 TestNetworkPlugins/group/flannel/HairPin 0.14
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
349 TestNetworkPlugins/group/bridge/NetCatPod 11.39
351 TestStartStop/group/embed-certs/serial/FirstStart 114.6
352 TestNetworkPlugins/group/bridge/DNS 0.2
353 TestNetworkPlugins/group/bridge/Localhost 0.15
354 TestNetworkPlugins/group/bridge/HairPin 0.15
356 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.59
357 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
359 TestStartStop/group/old-k8s-version/serial/Stop 92.41
360 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
362 TestStartStop/group/embed-certs/serial/DeployApp 11.26
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.05
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
365 TestStartStop/group/embed-certs/serial/Stop 92.63
366 TestStartStop/group/no-preload/serial/DeployApp 11.27
367 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
368 TestStartStop/group/no-preload/serial/Stop 92.07
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/old-k8s-version/serial/SecondStart 107.66
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
372 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 322.63
373 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
374 TestStartStop/group/embed-certs/serial/SecondStart 405.62
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
376 TestStartStop/group/no-preload/serial/SecondStart 300.67
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 7.01
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/old-k8s-version/serial/Pause 2.8
382 TestStartStop/group/newest-cni/serial/FirstStart 60.08
383 TestStartStop/group/newest-cni/serial/DeployApp 0
384 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
385 TestStartStop/group/newest-cni/serial/Stop 92.3
386 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
387 TestStartStop/group/newest-cni/serial/SecondStart 36.2
388 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
391 TestStartStop/group/newest-cni/serial/Pause 2.73
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.01
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.66
396 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
397 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
398 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
399 TestStartStop/group/no-preload/serial/Pause 2.56
400 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
401 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
402 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
403 TestStartStop/group/embed-certs/serial/Pause 2.49
TestDownloadOnly/v1.16.0/json-events (59.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-151909 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-151909 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (59.511315024s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (59.51s)
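
With -o=json, minikube emits progress as a stream of JSON events, one object per line, which is what this subtest consumes. A quick way to inspect the stream by hand, assuming jq is installed (download-demo is a placeholder profile name, and the type, data.currentstep, and data.message fields follow minikube's CloudEvents-style schema, which may vary between versions):

    out/minikube-linux-amd64 start -o=json --download-only -p download-demo \
        --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2 \
      | jq -c 'select(.data != null) | {type, step: .data.currentstep, msg: .data.message}'

Judging by their names, the TestJSONOutput/*/DistinctCurrentSteps and IncreasingCurrentSteps entries in the pass table above exercise the same currentstep field.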

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
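
This subtest essentially confirms that the preload tarball fetched during json-events is sitting in the cache. The equivalent manual check, using the MINIKUBE_HOME and md5 checksum that appear verbatim in this run's logs (see the LogsDuration output below):

    MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
    PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4"
    ls -lh "$PRELOAD"
    md5sum "$PRELOAD"   # the download URL pins checksum=md5:d96a2b2afa188e17db7ddabb58d563fd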

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-151909
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-151909: exit status 85 (71.443012ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-151909 | jenkins | v1.32.0 | 15 Jan 24 02:43 UTC |          |
	|         | -p download-only-151909        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 02:43:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 02:43:31.317407   14966 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:43:31.317555   14966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:43:31.317564   14966 out.go:309] Setting ErrFile to fd 2...
	I0115 02:43:31.317572   14966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:43:31.317738   14966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	W0115 02:43:31.317871   14966 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17909-7685/.minikube/config/config.json: open /home/jenkins/minikube-integration/17909-7685/.minikube/config/config.json: no such file or directory
	I0115 02:43:31.318439   14966 out.go:303] Setting JSON to true
	I0115 02:43:31.319205   14966 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1556,"bootTime":1705285055,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:43:31.319262   14966 start.go:138] virtualization: kvm guest
	I0115 02:43:31.321757   14966 out.go:97] [download-only-151909] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:43:31.323247   14966 out.go:169] MINIKUBE_LOCATION=17909
	W0115 02:43:31.321854   14966 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 02:43:31.321893   14966 notify.go:220] Checking for updates...
	I0115 02:43:31.325998   14966 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:43:31.327515   14966 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:43:31.328909   14966 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:43:31.330309   14966 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 02:43:31.332894   14966 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 02:43:31.333150   14966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:43:31.435373   14966 out.go:97] Using the kvm2 driver based on user configuration
	I0115 02:43:31.435431   14966 start.go:296] selected driver: kvm2
	I0115 02:43:31.435443   14966 start.go:900] validating driver "kvm2" against <nil>
	I0115 02:43:31.435781   14966 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:43:31.435904   14966 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17909-7685/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 02:43:31.448906   14966 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 02:43:31.448952   14966 start_flags.go:308] no existing cluster config was found, will generate one from the flags 
	I0115 02:43:31.449413   14966 start_flags.go:391] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 02:43:31.449559   14966 start_flags.go:925] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 02:43:31.449615   14966 cni.go:84] Creating CNI manager for ""
	I0115 02:43:31.449632   14966 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 02:43:31.449639   14966 start_flags.go:317] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 02:43:31.449687   14966 start.go:339] cluster config:
	{Name:download-only-151909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-151909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:43:31.449834   14966 iso.go:125] acquiring lock: {Name:mk557eda9a6ce643c635b77cd4c9cb212ca64fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:43:31.451714   14966 out.go:97] Downloading VM boot image ...
	I0115 02:43:31.451746   14966 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 02:43:42.379075   14966 out.go:97] Starting "download-only-151909" primary control-plane node in "download-only-151909" cluster
	I0115 02:43:42.379106   14966 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 02:43:42.533345   14966 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0115 02:43:42.533382   14966 cache.go:56] Caching tarball of preloaded images
	I0115 02:43:42.533557   14966 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 02:43:42.535534   14966 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 02:43:42.535550   14966 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:43:42.692208   14966 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0115 02:43:58.429685   14966 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:43:58.429774   14966 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:43:59.323280   14966 cache.go:59] Finished verifying existence of preloaded tar for v1.16.0 on containerd
	I0115 02:43:59.323660   14966 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/download-only-151909/config.json ...
	I0115 02:43:59.323692   14966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/download-only-151909/config.json: {Name:mk28bd73879a1559fac4e11b343a8476922be16b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:43:59.323875   14966 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 02:43:59.324074   14966 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control-plane node download-only-151909 host does not exist
	  To start a cluster, run: "minikube start -p download-only-151909"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
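
The non-zero exit is expected here: a --download-only profile never creates a VM, so minikube logs has nothing to collect ("The control-plane node download-only-151909 host does not exist" in the stdout above) and the command fails with exit status 85 after printing only the audit table and the last start log. Reproducing by hand, assuming the profile has not yet been deleted:

    out/minikube-linux-amd64 logs -p download-only-151909
    echo "exit status: $?"   # 85 in this run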

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-151909
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (49.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-006146 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-006146 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (49.184049595s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (49.18s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-006146
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-006146: exit status 85 (74.762067ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-151909 | jenkins | v1.32.0 | 15 Jan 24 02:43 UTC |                     |
	|         | -p download-only-151909        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 02:44 UTC | 15 Jan 24 02:44 UTC |
	| delete  | -p download-only-151909        | download-only-151909 | jenkins | v1.32.0 | 15 Jan 24 02:44 UTC | 15 Jan 24 02:44 UTC |
	| start   | -o=json --download-only        | download-only-006146 | jenkins | v1.32.0 | 15 Jan 24 02:44 UTC |                     |
	|         | -p download-only-006146        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 02:44:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 02:44:31.174915   15245 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:44:31.175164   15245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:44:31.175173   15245 out.go:309] Setting ErrFile to fd 2...
	I0115 02:44:31.175180   15245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:44:31.175381   15245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:44:31.175950   15245 out.go:303] Setting JSON to true
	I0115 02:44:31.176741   15245 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1616,"bootTime":1705285055,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:44:31.176796   15245 start.go:138] virtualization: kvm guest
	I0115 02:44:31.179025   15245 out.go:97] [download-only-006146] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:44:31.180528   15245 out.go:169] MINIKUBE_LOCATION=17909
	I0115 02:44:31.179186   15245 notify.go:220] Checking for updates...
	I0115 02:44:31.183358   15245 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:44:31.184941   15245 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:44:31.186421   15245 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:44:31.187837   15245 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 02:44:31.190440   15245 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 02:44:31.190629   15245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:44:31.220674   15245 out.go:97] Using the kvm2 driver based on user configuration
	I0115 02:44:31.220699   15245 start.go:296] selected driver: kvm2
	I0115 02:44:31.220703   15245 start.go:900] validating driver "kvm2" against <nil>
	I0115 02:44:31.220991   15245 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:44:31.221058   15245 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17909-7685/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 02:44:31.234426   15245 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 02:44:31.234498   15245 start_flags.go:308] no existing cluster config was found, will generate one from the flags 
	I0115 02:44:31.234914   15245 start_flags.go:391] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 02:44:31.235051   15245 start_flags.go:925] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 02:44:31.235095   15245 cni.go:84] Creating CNI manager for ""
	I0115 02:44:31.235111   15245 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 02:44:31.235120   15245 start_flags.go:317] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 02:44:31.235162   15245 start.go:339] cluster config:
	{Name:download-only-006146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-006146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:44:31.235242   15245 iso.go:125] acquiring lock: {Name:mk557eda9a6ce643c635b77cd4c9cb212ca64fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:44:31.236753   15245 out.go:97] Starting "download-only-006146" primary control-plane node in "download-only-006146" cluster
	I0115 02:44:31.236767   15245 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:44:31.878371   15245 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 02:44:31.878406   15245 cache.go:56] Caching tarball of preloaded images
	I0115 02:44:31.878552   15245 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:44:31.880596   15245 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0115 02:44:31.880612   15245 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:44:32.030666   15245 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 02:44:47.097600   15245 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:44:47.097689   15245 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:44:48.024876   15245 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0115 02:44:48.025219   15245 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/download-only-006146/config.json ...
	I0115 02:44:48.025247   15245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/download-only-006146/config.json: {Name:mk01088728bf96daa0934d6f71747419604363b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:44:48.025403   15245 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 02:44:48.025976   15245 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-006146 host does not exist
	  To start a cluster, run: "minikube start -p download-only-006146"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-006146
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.0-rc.2/json-events (54.79s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-041054 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-041054 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (54.789656577s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (54.79s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-041054
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-041054: exit status 85 (71.44382ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-151909 | jenkins | v1.32.0 | 15 Jan 24 02:43 UTC |                     |
	|         | -p download-only-151909           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 02:44 UTC | 15 Jan 24 02:44 UTC |
	| delete  | -p download-only-151909           | download-only-151909 | jenkins | v1.32.0 | 15 Jan 24 02:44 UTC | 15 Jan 24 02:44 UTC |
	| start   | -o=json --download-only           | download-only-006146 | jenkins | v1.32.0 | 15 Jan 24 02:44 UTC |                     |
	|         | -p download-only-006146           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 02:45 UTC | 15 Jan 24 02:45 UTC |
	| delete  | -p download-only-006146           | download-only-006146 | jenkins | v1.32.0 | 15 Jan 24 02:45 UTC | 15 Jan 24 02:45 UTC |
	| start   | -o=json --download-only           | download-only-041054 | jenkins | v1.32.0 | 15 Jan 24 02:45 UTC |                     |
	|         | -p download-only-041054           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 02:45:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 02:45:20.708031   15498 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:45:20.708168   15498 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:45:20.708180   15498 out.go:309] Setting ErrFile to fd 2...
	I0115 02:45:20.708187   15498 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:45:20.708370   15498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:45:20.708918   15498 out.go:303] Setting JSON to true
	I0115 02:45:20.709660   15498 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1666,"bootTime":1705285055,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:45:20.709715   15498 start.go:138] virtualization: kvm guest
	I0115 02:45:20.711865   15498 out.go:97] [download-only-041054] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:45:20.713420   15498 out.go:169] MINIKUBE_LOCATION=17909
	I0115 02:45:20.712046   15498 notify.go:220] Checking for updates...
	I0115 02:45:20.716288   15498 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:45:20.717737   15498 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:45:20.719074   15498 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:45:20.720418   15498 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 02:45:20.723207   15498 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 02:45:20.723419   15498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:45:20.754837   15498 out.go:97] Using the kvm2 driver based on user configuration
	I0115 02:45:20.754870   15498 start.go:296] selected driver: kvm2
	I0115 02:45:20.754876   15498 start.go:900] validating driver "kvm2" against <nil>
	I0115 02:45:20.755179   15498 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:45:20.755280   15498 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17909-7685/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 02:45:20.768585   15498 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 02:45:20.768676   15498 start_flags.go:308] no existing cluster config was found, will generate one from the flags 
	I0115 02:45:20.769135   15498 start_flags.go:391] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 02:45:20.769286   15498 start_flags.go:925] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 02:45:20.769341   15498 cni.go:84] Creating CNI manager for ""
	I0115 02:45:20.769359   15498 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 02:45:20.769374   15498 start_flags.go:317] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 02:45:20.769430   15498 start.go:339] cluster config:
	{Name:download-only-041054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-041054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:45:20.769531   15498 iso.go:125] acquiring lock: {Name:mk557eda9a6ce643c635b77cd4c9cb212ca64fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 02:45:20.771252   15498 out.go:97] Starting "download-only-041054" primary control-plane node in "download-only-041054" cluster
	I0115 02:45:20.771275   15498 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 02:45:21.413019   15498 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0115 02:45:21.413054   15498 cache.go:56] Caching tarball of preloaded images
	I0115 02:45:21.413226   15498 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 02:45:21.415152   15498 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0115 02:45:21.415168   15498 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:45:21.570722   15498 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0115 02:45:41.366004   15498 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:45:41.366097   15498 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17909-7685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0115 02:45:42.173802   15498 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0115 02:45:42.174128   15498 profile.go:142] Saving config to /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/download-only-041054/config.json ...
	I0115 02:45:42.174158   15498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/download-only-041054/config.json: {Name:mk1cae26896410bd4c98bd5cf127438a3be55778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 02:45:42.174305   15498 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 02:45:42.174433   15498 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17909-7685/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-041054 host does not exist
	  To start a cluster, run: "minikube start -p download-only-041054"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-041054
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-172971 --alsologtostderr --binary-mirror http://127.0.0.1:33023 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-172971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-172971
--- PASS: TestBinaryMirror (0.58s)
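TestBinaryMirror points minikube at a local HTTP server via --binary-mirror http://127.0.0.1:33023, so kubectl/kubelet/kubeadm are fetched from it instead of dl.k8s.io. A minimal stand-in for such a mirror, assuming the binaries are already laid out on disk under ./mirror in whatever subpaths minikube requests (the directory name is illustrative; the exact subpaths are not shown in this log):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the local ./mirror tree on the address the test uses.
	log.Fatal(http.ListenAndServe("127.0.0.1:33023", http.FileServer(http.Dir("./mirror"))))
}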

TestOffline (126.74s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-918002 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-918002 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m5.669141693s)
helpers_test.go:175: Cleaning up "offline-containerd-918002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-918002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-918002: (1.071247746s)
--- PASS: TestOffline (126.74s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-974059
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-974059: exit status 85 (59.289127ms)

-- stdout --
	* Profile "addons-974059" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-974059"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-974059
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-974059: exit status 85 (61.283069ms)

-- stdout --
	* Profile "addons-974059" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-974059"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (216.72s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-974059 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-974059 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.718829336s)
--- PASS: TestAddons/Setup (216.72s)

TestAddons/parallel/Registry (19.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 16.217415ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lxlqs" [9e31c26e-4abb-4384-bc2e-5ea1be84e604] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006599222s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5ndqf" [5c77c7c4-f5be-4480-bea1-b1f2286e3b2b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006969699s
addons_test.go:340: (dbg) Run:  kubectl --context addons-974059 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-974059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-974059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.521662982s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 ip
2024/01/15 02:50:12 [DEBUG] GET http://192.168.39.115:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.54s)
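The registry check above runs wget --spider inside a busybox pod because registry.kube-system.svc.cluster.local only resolves from within the cluster. A rough in-cluster Go equivalent of that probe, using a HEAD request to confirm the service answers (wget --spider can fall back to GET, so this is only an approximation):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Fails outside the cluster: the service name is cluster-internal DNS.
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}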

TestAddons/parallel/InspektorGadget (11.03s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vl4cp" [67f5b423-bca3-43a3-9aac-a2eee1fa4621] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006439443s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-974059
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-974059: (6.021188481s)
--- PASS: TestAddons/parallel/InspektorGadget (11.03s)

TestAddons/parallel/MetricsServer (6.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.803044ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-mc2hw" [46aae371-3052-4919-8103-27e76a8d869a] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005674716s
addons_test.go:415: (dbg) Run:  kubectl --context addons-974059 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.97s)

TestAddons/parallel/HelmTiller (30.67s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.190788ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-nbzmm" [9b81e3a3-b370-494f-9c93-3cb39b23a5fc] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004833565s
addons_test.go:473: (dbg) Run:  kubectl --context addons-974059 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-974059 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (23.998722985s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (30.67s)

TestAddons/parallel/CSI (70.97s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 18.557427ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-974059 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc -o jsonpath={.status.phase} -n default
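The block of identical "get pvc ... jsonpath={.status.phase}" lines above is the test helper polling until the PVC leaves Pending. A sketch of that loop under the assumption that it shells out to kubectl and retries on a fixed interval (waitForPVCBound and the 2s interval are illustrative, not the helpers_test.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPVCBound polls the PVC's status.phase until it reports Bound
// or the timeout elapses.
func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-974059", "hpvc", "default", 6*time.Minute))
}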
addons_test.go:574: (dbg) Run:  kubectl --context addons-974059 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [41f9e712-3dfb-450a-9d45-340815ca8f0e] Pending
helpers_test.go:344: "task-pv-pod" [41f9e712-3dfb-450a-9d45-340815ca8f0e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [41f9e712-3dfb-450a-9d45-340815ca8f0e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004312968s
addons_test.go:584: (dbg) Run:  kubectl --context addons-974059 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-974059 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-974059 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-974059 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-974059 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-974059 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-974059 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cff0e3a9-4232-47b7-bbe3-651b07f6caad] Pending
helpers_test.go:344: "task-pv-pod-restore" [cff0e3a9-4232-47b7-bbe3-651b07f6caad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cff0e3a9-4232-47b7-bbe3-651b07f6caad] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 27.004120284s
addons_test.go:626: (dbg) Run:  kubectl --context addons-974059 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-974059 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-974059 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-974059 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.005123034s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-amd64 -p addons-974059 addons disable volumesnapshots --alsologtostderr -v=1: (1.148911135s)
--- PASS: TestAddons/parallel/CSI (70.97s)

TestAddons/parallel/Headlamp (41.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-974059 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-974059 --alsologtostderr -v=1: (2.051133475s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-6w8xh" [a541103c-c8e9-438b-911f-548bfe200b82] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-6w8xh" [a541103c-c8e9-438b-911f-548bfe200b82] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-6w8xh" [a541103c-c8e9-438b-911f-548bfe200b82] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 39.004560839s
--- PASS: TestAddons/parallel/Headlamp (41.06s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-8n2vh" [0b5814b7-5e17-4135-bd5c-a84cf3dfe31d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005226462s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-974059
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (58.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-974059 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-974059 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d576dd14-1df8-428b-a8ab-3b1eccf2b55b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d576dd14-1df8-428b-a8ab-3b1eccf2b55b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d576dd14-1df8-428b-a8ab-3b1eccf2b55b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.005216569s
addons_test.go:891: (dbg) Run:  kubectl --context addons-974059 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 ssh "cat /opt/local-path-provisioner/pvc-1077ad20-ac07-4be2-a7fc-a7cbe9e3db68_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-974059 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-974059 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-974059 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-974059 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.54089488s)
--- PASS: TestAddons/parallel/LocalPath (58.41s)

TestAddons/parallel/NvidiaDevicePlugin (5.81s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hq969" [7bed1f75-9fa1-4caa-bad7-a0809fe0e985] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.02811339s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-974059
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.81s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-dc8lk" [1c3e0301-e9aa-434c-b9e4-bf0a837f47d1] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00944819s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-974059 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-974059 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-974059
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-974059: (1m32.055268404s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-974059
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-974059
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-974059
--- PASS: TestAddons/StoppedEnableDisable (92.35s)

TestCertOptions (57.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-502275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0115 03:57:34.543807   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-502275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (55.545608571s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-502275 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-502275 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-502275 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-502275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-502275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-502275: (1.727008097s)
--- PASS: TestCertOptions (57.81s)
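
Note: the `openssl x509 -text -noout` step above is what confirms that the extra --apiserver-ips/--apiserver-names values were baked into the apiserver certificate as subject alternative names. A minimal Go sketch of the same SAN check, assuming the certificate has been copied off the node to a local file named apiserver.crt (a hypothetical path; the test inspects it over `minikube ssh` instead):

	// sancheck.go: illustrative only, not part of the test suite.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// With the flags above, 192.168.15.15 should appear among the IP
		// SANs and www.google.com among the DNS SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}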

TestCertExpiration (270.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-024733 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-024733 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (57.58982587s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-024733 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-024733 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (31.734531833s)
helpers_test.go:175: Cleaning up "cert-expiration-024733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-024733
--- PASS: TestCertExpiration (270.25s)

TestForceSystemdFlag (51.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-663149 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-663149 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (50.097002993s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-663149 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-663149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-663149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-663149: (1.082189265s)
--- PASS: TestForceSystemdFlag (51.39s)
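
Note: the `cat /etc/containerd/config.toml` step above checks that --force-systemd switched containerd's runc runtime to the systemd cgroup driver. A rough sketch of that check, assuming a local copy of config.toml and assuming the documented `SystemdCgroup` runc option as the TOML key (the test reads the file over `minikube ssh` instead):

	// Illustrative sketch; not part of the test suite.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("config.toml") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		// containerd's runc options carry SystemdCgroup = true when the
		// systemd cgroup driver is in use.
		if strings.Contains(string(data), "SystemdCgroup = true") {
			fmt.Println("systemd cgroup driver is enabled")
		} else {
			fmt.Println("systemd cgroup driver is NOT enabled")
		}
	}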

TestForceSystemdEnv (99.86s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-916080 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-916080 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m38.59895384s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-916080 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-916080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-916080
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-916080: (1.034149742s)
--- PASS: TestForceSystemdEnv (99.86s)

TestKVMDriverInstallOrUpdate (22.25s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (22.25s)

TestErrorSpam/setup (48.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-671222 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-671222 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-671222 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-671222 --driver=kvm2  --container-runtime=containerd: (48.579018093s)
--- PASS: TestErrorSpam/setup (48.58s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (4.7s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 stop: (1.504722844s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 stop: (1.935368905s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-671222 --log_dir /tmp/nospam-671222 stop: (1.255517004s)
--- PASS: TestErrorSpam/stop (4.70s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17909-7685/.minikube/files/etc/test/nested/copy/14954/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (100.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195136 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0115 02:54:53.535702   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:53.541557   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:53.551771   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:53.572047   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:53.612330   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:53.692645   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:53.853060   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:54.173644   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:54.814563   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:56.095437   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:54:58.655796   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:55:03.775970   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:55:14.016505   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 02:55:34.497562   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-195136 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m40.183221441s)
--- PASS: TestFunctional/serial/StartWithProxy (100.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.35s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195136 --alsologtostderr -v=8
E0115 02:56:15.457724   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-195136 --alsologtostderr -v=8: (42.34879014s)
functional_test.go:659: soft start took 42.349455276s for "functional-195136" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.35s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-195136 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 cache add registry.k8s.io/pause:3.1: (1.201930658s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 cache add registry.k8s.io/pause:3.3: (1.437145701s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 cache add registry.k8s.io/pause:latest: (1.200240776s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.84s)

TestFunctional/serial/CacheCmd/cache/add_local (3.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-195136 /tmp/TestFunctionalserialCacheCmdcacheadd_local2982786412/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cache add minikube-local-cache-test:functional-195136
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 cache add minikube-local-cache-test:functional-195136: (3.648122354s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cache delete minikube-local-cache-test:functional-195136
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-195136
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.779945ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 cache reload: (1.137300087s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
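
Note: the non-zero `crictl inspecti` exit above is expected; the sequence is remove the image on the node, confirm it is gone, run `cache reload`, then confirm it is back. The same round trip can be reproduced by hand; a sketch via os/exec, assuming minikube is on PATH and reusing this run's profile name:

	// Illustrative sketch of the cache-reload round trip; not part of the test suite.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// ok reports whether the command exited zero.
	func ok(name string, args ...string) bool {
		return exec.Command(name, args...).Run() == nil
	}

	func main() {
		const profile = "functional-195136" // profile name from this run
		const img = "registry.k8s.io/pause:latest"

		// 1. Remove the image on the node.
		ok("minikube", "-p", profile, "ssh", "sudo crictl rmi "+img)
		// 2. Expect the inspect to fail now.
		fmt.Println("present after rmi:   ", ok("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img))
		// 3. Reload from minikube's local image cache.
		ok("minikube", "-p", profile, "cache", "reload")
		// 4. Expect the inspect to succeed again.
		fmt.Println("present after reload:", ok("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img))
	}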

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 kubectl -- --context functional-195136 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-195136 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (49.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195136 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-195136 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.029115399s)
functional_test.go:757: restart took 49.029227136s for "functional-195136" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.03s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-195136 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 logs: (1.400385757s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.43s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 logs --file /tmp/TestFunctionalserialLogsFileCmd1686183486/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 logs --file /tmp/TestFunctionalserialLogsFileCmd1686183486/001/logs.txt: (1.433570349s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (4.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-195136 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-195136
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-195136: exit status 115 (291.195979ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.185:32434 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-195136 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-195136 delete -f testdata/invalidsvc.yaml: (1.294874904s)
--- PASS: TestFunctional/serial/InvalidService (4.79s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 config get cpus: exit status 14 (55.202905ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 config get cpus: exit status 14 (56.590583ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (12.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-195136 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-195136 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23128: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.06s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195136 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-195136 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (138.314836ms)
-- stdout --
	* [functional-195136] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0115 02:58:05.219524   22919 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:58:05.219629   22919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:05.219637   22919 out.go:309] Setting ErrFile to fd 2...
	I0115 02:58:05.219642   22919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:05.219843   22919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:58:05.220335   22919 out.go:303] Setting JSON to false
	I0115 02:58:05.221204   22919 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2430,"bootTime":1705285055,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:58:05.221266   22919 start.go:138] virtualization: kvm guest
	I0115 02:58:05.223318   22919 out.go:177] * [functional-195136] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 02:58:05.225228   22919 notify.go:220] Checking for updates...
	I0115 02:58:05.225234   22919 out.go:177]   - MINIKUBE_LOCATION=17909
	I0115 02:58:05.226610   22919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:58:05.227999   22919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:58:05.229338   22919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:05.230588   22919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 02:58:05.231882   22919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 02:58:05.233619   22919 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:58:05.234082   22919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:58:05.234120   22919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:58:05.247924   22919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0115 02:58:05.248311   22919 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:58:05.248830   22919 main.go:141] libmachine: Using API Version  1
	I0115 02:58:05.248876   22919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:58:05.249210   22919 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:58:05.249429   22919 main.go:141] libmachine: (functional-195136) Calling .DriverName
	I0115 02:58:05.249663   22919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:58:05.250055   22919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:58:05.250098   22919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:58:05.263908   22919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0115 02:58:05.264287   22919 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:58:05.264746   22919 main.go:141] libmachine: Using API Version  1
	I0115 02:58:05.264773   22919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:58:05.265063   22919 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:58:05.265217   22919 main.go:141] libmachine: (functional-195136) Calling .DriverName
	I0115 02:58:05.296890   22919 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 02:58:05.298240   22919 start.go:296] selected driver: kvm2
	I0115 02:58:05.298251   22919 start.go:900] validating driver "kvm2" against &{Name:functional-195136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-195136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:58:05.298368   22919 start.go:911] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 02:58:05.301393   22919 out.go:177] 
	W0115 02:58:05.303051   22919 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 02:58:05.304311   22919 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195136 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
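
Note: the exit status 23 above comes from memory validation; 250MB is below minikube's usable minimum of 1800MB, so the dry run aborts before provisioning anything. A toy version of that guard, with the threshold taken from the RSRC_INSUFFICIENT_REQ_MEMORY message above rather than from minikube's source:

	// Illustrative sketch of the memory floor check; not minikube's actual code.
	package main

	import "fmt"

	// minUsableMB mirrors the 1800MB floor quoted in the error above.
	const minUsableMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // rejected, as in the dry run above
		fmt.Println(validateMemory(4000)) // accepted (prints <nil>)
	}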

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195136 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-195136 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (140.082864ms)
-- stdout --
	* [functional-195136] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0115 02:58:06.268423   23064 out.go:296] Setting OutFile to fd 1 ...
	I0115 02:58:06.268546   23064 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:06.268557   23064 out.go:309] Setting ErrFile to fd 2...
	I0115 02:58:06.268564   23064 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 02:58:06.268838   23064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 02:58:06.269390   23064 out.go:303] Setting JSON to false
	I0115 02:58:06.270282   23064 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2431,"bootTime":1705285055,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 02:58:06.270350   23064 start.go:138] virtualization: kvm guest
	I0115 02:58:06.272666   23064 out.go:177] * [functional-195136] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0115 02:58:06.274281   23064 out.go:177]   - MINIKUBE_LOCATION=17909
	I0115 02:58:06.275845   23064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 02:58:06.274298   23064 notify.go:220] Checking for updates...
	I0115 02:58:06.277310   23064 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 02:58:06.278828   23064 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 02:58:06.280202   23064 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 02:58:06.281568   23064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 02:58:06.283257   23064 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 02:58:06.283667   23064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:58:06.283736   23064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:58:06.298383   23064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41537
	I0115 02:58:06.298755   23064 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:58:06.299246   23064 main.go:141] libmachine: Using API Version  1
	I0115 02:58:06.299270   23064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:58:06.299589   23064 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:58:06.299759   23064 main.go:141] libmachine: (functional-195136) Calling .DriverName
	I0115 02:58:06.299960   23064 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 02:58:06.300254   23064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 02:58:06.300293   23064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 02:58:06.314372   23064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I0115 02:58:06.314710   23064 main.go:141] libmachine: () Calling .GetVersion
	I0115 02:58:06.315156   23064 main.go:141] libmachine: Using API Version  1
	I0115 02:58:06.315174   23064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 02:58:06.315457   23064 main.go:141] libmachine: () Calling .GetMachineName
	I0115 02:58:06.315626   23064 main.go:141] libmachine: (functional-195136) Calling .DriverName
	I0115 02:58:06.346917   23064 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0115 02:58:06.348154   23064 start.go:296] selected driver: kvm2
	I0115 02:58:06.348167   23064 start.go:900] validating driver "kvm2" against &{Name:functional-195136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-195136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 02:58:06.348305   23064 start.go:911] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 02:58:06.350611   23064 out.go:177] 
	W0115 02:58:06.351818   23064 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 02:58:06.353067   23064 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.77s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

TestFunctional/parallel/ServiceCmdConnect (14.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-195136 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-195136 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-hx5xc" [52169961-e590-464c-b255-caabce9fb879] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-hx5xc" [52169961-e590-464c-b255-caabce9fb879] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.005604515s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.185:32147
functional_test.go:1674: http://192.168.39.185:32147: success! body:
Hostname: hello-node-connect-55497b8b78-hx5xc
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.185:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.185:32147
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.54s)
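
Note: the success check above is a plain HTTP GET against the NodePort URL returned by `minikube service hello-node-connect --url`, asserting that the echoserver's reply mentions the serving pod. A minimal probe, with this run's URL hard-coded as an example (it differs per cluster):

	// Illustrative NodePort probe; not part of the test suite.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		// URL reported by `minikube service hello-node-connect --url` in this run.
		const url = "http://192.168.39.185:32147"
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// The echoserver reflects the request; the test looks for the
		// pod's hostname in the body.
		fmt.Println("mentions pod:", strings.Contains(string(body), "hello-node-connect"))
	}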

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (52.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01081916-eca6-499a-8b6a-a56cd3e9b121] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006170526s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-195136 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-195136 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-195136 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-195136 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-195136 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5bd43cc2-948b-476d-9313-134effebabf4] Pending
helpers_test.go:344: "sp-pod" [5bd43cc2-948b-476d-9313-134effebabf4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5bd43cc2-948b-476d-9313-134effebabf4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004474438s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-195136 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-195136 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-195136 delete -f testdata/storage-provisioner/pod.yaml: (1.286824556s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-195136 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d0c038e-0f78-46af-9f1f-b44e8f1a6751] Pending
helpers_test.go:344: "sp-pod" [7d0c038e-0f78-46af-9f1f-b44e8f1a6751] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d0c038e-0f78-46af-9f1f-b44e8f1a6751] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004703979s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-195136 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.01s)
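
Note: the test applies fixtures from testdata/storage-provisioner whose contents are not shown in this log. A hypothetical claim in the same spirit (the name matches the `get pvc myclaim` calls above; access mode and size are assumptions):

	kubectl --context functional-195136 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim                      # name taken from the log above
	spec:
	  accessModes: ["ReadWriteOnce"]     # assumption
	  resources:
	    requests:
	      storage: 500Mi                 # assumption
	EOF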

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "echo hello"
E0115 02:57:37.377851   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh -n functional-195136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cp functional-195136:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2716061970/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh -n functional-195136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh -n functional-195136 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)
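
Note: the helpers verify each copy via `ssh ... sudo cat` on the destination; an equivalent hand check with checksums (md5sum is an assumption, not what helpers_test.go runs):

	out/minikube-linux-amd64 -p functional-195136 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-195136 ssh -n functional-195136 "md5sum /home/docker/cp-test.txt"
	md5sum testdata/cp-test.txt   # the two digests should match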

                                                
                                    
TestFunctional/parallel/MySQL (29.09s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-195136 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-27p2d" [b5d0b681-b529-41a9-8693-7a934a5d0a3a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-27p2d" [b5d0b681-b529-41a9-8693-7a934a5d0a3a] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00599675s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;": exit status 1 (384.845816ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;": exit status 1 (286.296116ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;": exit status 1 (262.256217ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-195136 exec mysql-859648c796-27p2d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.09s)
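
Note: the three non-zero exits above are the normal warm-up sequence for a fresh mysql pod (ERROR 2002: the server socket is not listening yet; ERROR 1045: the server is up but init has not yet applied the configured root password). The retry the test performs can be sketched in shell (interval and attempt count are assumptions; the Go test uses its own backoff):

	for i in $(seq 1 10); do
	  kubectl --context functional-195136 exec mysql-859648c796-27p2d -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 3
	done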

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/14954/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /etc/test/nested/copy/14954/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
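
Note: file sync mirrors files placed under the host's $MINIKUBE_HOME/files tree into the guest at the matching absolute path; the synced copy can be inspected directly (path taken from the log above):

	out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /etc/test/nested/copy/14954/hosts"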

                                                
                                    
TestFunctional/parallel/CertSync (1.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/14954.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /etc/ssl/certs/14954.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/14954.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /usr/share/ca-certificates/14954.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/149542.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /etc/ssl/certs/149542.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/149542.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /usr/share/ca-certificates/149542.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.30s)
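
Note: names like 51391683.0 and 3ec20f2e.0 are OpenSSL subject-hash link names; the hash for a given certificate can be recomputed by hand (a check for illustration, not part of the test; presumably it prints the hash used in the link checked above):

	openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/14954.pem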

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-195136 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
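
Note: the go-template above flattens the first node's label keys onto one line; an equivalent jsonpath form, shown for illustration (not what the test runs):

	kubectl --context functional-195136 get nodes -o jsonpath='{.items[0].metadata.labels}'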

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh "sudo systemctl is-active docker": exit status 1 (230.362404ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh "sudo systemctl is-active crio": exit status 1 (214.174647ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
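
Note: `systemctl is-active` prints the unit state and exits non-zero (3 here) for inactive units, which is the `ssh: Process exited with status 3` seen above; a hand check (the escaped \$? keeps the expansion on the remote side):

	out/minikube-linux-amd64 -p functional-195136 ssh "sudo systemctl is-active docker; echo exit=\$?"
	# expected on this containerd node: "inactive" then "exit=3"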

                                                
                                    
TestFunctional/parallel/License (0.8s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195136 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-195136
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-195136
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195136 image ls --format short --alsologtostderr:
I0115 02:58:13.111973   23383 out.go:296] Setting OutFile to fd 1 ...
I0115 02:58:13.112232   23383 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:13.112241   23383 out.go:309] Setting ErrFile to fd 2...
I0115 02:58:13.112246   23383 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:13.112451   23383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
I0115 02:58:13.113147   23383 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:13.113257   23383 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:13.113708   23383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:13.113756   23383 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:13.128080   23383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
I0115 02:58:13.128498   23383 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:13.129039   23383 main.go:141] libmachine: Using API Version  1
I0115 02:58:13.129061   23383 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:13.129452   23383 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:13.129651   23383 main.go:141] libmachine: (functional-195136) Calling .GetState
I0115 02:58:13.131578   23383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:13.131624   23383 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:13.145446   23383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
I0115 02:58:13.145882   23383 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:13.146302   23383 main.go:141] libmachine: Using API Version  1
I0115 02:58:13.146326   23383 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:13.146736   23383 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:13.146956   23383 main.go:141] libmachine: (functional-195136) Calling .DriverName
I0115 02:58:13.147164   23383 ssh_runner.go:195] Run: systemctl --version
I0115 02:58:13.147193   23383 main.go:141] libmachine: (functional-195136) Calling .GetSSHHostname
I0115 02:58:13.150062   23383 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:13.150501   23383 main.go:141] libmachine: (functional-195136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:58", ip: ""} in network mk-functional-195136: {Iface:virbr1 ExpiryTime:2024-01-15 03:54:20 +0000 UTC Type:0 Mac:52:54:00:5c:9f:58 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:functional-195136 Clientid:01:52:54:00:5c:9f:58}
I0115 02:58:13.150535   23383 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined IP address 192.168.39.185 and MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:13.150682   23383 main.go:141] libmachine: (functional-195136) Calling .GetSSHPort
I0115 02:58:13.150853   23383 main.go:141] libmachine: (functional-195136) Calling .GetSSHKeyPath
I0115 02:58:13.150984   23383 main.go:141] libmachine: (functional-195136) Calling .GetSSHUsername
I0115 02:58:13.151107   23383 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/functional-195136/id_rsa Username:docker}
I0115 02:58:13.250678   23383 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 02:58:13.318572   23383 main.go:141] libmachine: Making call to close driver server
I0115 02:58:13.318587   23383 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:13.318821   23383 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:13.318840   23383 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 02:58:13.318851   23383 main.go:141] libmachine: Making call to close driver server
I0115 02:58:13.318850   23383 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
I0115 02:58:13.318861   23383 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:13.319065   23383 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:13.319077   23383 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
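
Note: as the stderr trace shows, the listing is assembled from crictl on the node; the raw source data can be fetched directly:

	out/minikube-linux-amd64 -p functional-195136 ssh "sudo crictl images --output json"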

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195136 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| docker.io/library/nginx                     | latest             | sha256:a87587 | 70.5MB |
| gcr.io/google-containers/addon-resizer      | functional-195136  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/minikube-local-cache-test | functional-195136  | sha256:5f469a | 1.01kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195136 image ls --format table --alsologtostderr:
I0115 02:58:14.518290   23616 out.go:296] Setting OutFile to fd 1 ...
I0115 02:58:14.518435   23616 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:14.518447   23616 out.go:309] Setting ErrFile to fd 2...
I0115 02:58:14.518454   23616 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:14.518737   23616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
I0115 02:58:14.519559   23616 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:14.519661   23616 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:14.520059   23616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:14.520096   23616 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:14.535717   23616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
I0115 02:58:14.536122   23616 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:14.536713   23616 main.go:141] libmachine: Using API Version  1
I0115 02:58:14.536738   23616 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:14.537072   23616 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:14.537281   23616 main.go:141] libmachine: (functional-195136) Calling .GetState
I0115 02:58:14.539196   23616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:14.539230   23616 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:14.552998   23616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
I0115 02:58:14.553338   23616 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:14.553752   23616 main.go:141] libmachine: Using API Version  1
I0115 02:58:14.553769   23616 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:14.554023   23616 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:14.554204   23616 main.go:141] libmachine: (functional-195136) Calling .DriverName
I0115 02:58:14.554383   23616 ssh_runner.go:195] Run: systemctl --version
I0115 02:58:14.554413   23616 main.go:141] libmachine: (functional-195136) Calling .GetSSHHostname
I0115 02:58:14.556928   23616 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:14.557313   23616 main.go:141] libmachine: (functional-195136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:58", ip: ""} in network mk-functional-195136: {Iface:virbr1 ExpiryTime:2024-01-15 03:54:20 +0000 UTC Type:0 Mac:52:54:00:5c:9f:58 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:functional-195136 Clientid:01:52:54:00:5c:9f:58}
I0115 02:58:14.557342   23616 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined IP address 192.168.39.185 and MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:14.557461   23616 main.go:141] libmachine: (functional-195136) Calling .GetSSHPort
I0115 02:58:14.557641   23616 main.go:141] libmachine: (functional-195136) Calling .GetSSHKeyPath
I0115 02:58:14.557773   23616 main.go:141] libmachine: (functional-195136) Calling .GetSSHUsername
I0115 02:58:14.557935   23616 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/functional-195136/id_rsa Username:docker}
I0115 02:58:14.645759   23616 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 02:58:14.700773   23616 main.go:141] libmachine: Making call to close driver server
I0115 02:58:14.700792   23616 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:14.701082   23616 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
I0115 02:58:14.701125   23616 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:14.701145   23616 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 02:58:14.701161   23616 main.go:141] libmachine: Making call to close driver server
I0115 02:58:14.701173   23616 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:14.701455   23616 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
I0115 02:58:14.701475   23616 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:14.701496   23616 main.go:141] libmachine: Making call to close connection to plugin binary
2024/01/15 02:58:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195136 image ls --format json --alsologtostderr:
[{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ead0a4a53df89fd173874b4609
3b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/
kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:a8758716bb6aa4d90071160d27028fe4ea
ee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5f469a56a4c1dbc92800e748a2671608532c8007bd510e25b06d6ff5efae9e78","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-195136"],"size":"1006"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-195136"],"size":"10823156"},{"id"
:"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195136 image ls --format json --alsologtostderr:
I0115 02:58:14.223086   23592 out.go:296] Setting OutFile to fd 1 ...
I0115 02:58:14.223251   23592 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:14.223263   23592 out.go:309] Setting ErrFile to fd 2...
I0115 02:58:14.223271   23592 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:14.223537   23592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
I0115 02:58:14.224256   23592 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:14.224405   23592 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:14.225022   23592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:14.225079   23592 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:14.239764   23592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
I0115 02:58:14.240273   23592 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:14.240893   23592 main.go:141] libmachine: Using API Version  1
I0115 02:58:14.240913   23592 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:14.241403   23592 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:14.241646   23592 main.go:141] libmachine: (functional-195136) Calling .GetState
I0115 02:58:14.243463   23592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:14.243507   23592 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:14.257491   23592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
I0115 02:58:14.257850   23592 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:14.258262   23592 main.go:141] libmachine: Using API Version  1
I0115 02:58:14.258285   23592 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:14.258639   23592 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:14.258845   23592 main.go:141] libmachine: (functional-195136) Calling .DriverName
I0115 02:58:14.259069   23592 ssh_runner.go:195] Run: systemctl --version
I0115 02:58:14.259094   23592 main.go:141] libmachine: (functional-195136) Calling .GetSSHHostname
I0115 02:58:14.261502   23592 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:14.261954   23592 main.go:141] libmachine: (functional-195136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:58", ip: ""} in network mk-functional-195136: {Iface:virbr1 ExpiryTime:2024-01-15 03:54:20 +0000 UTC Type:0 Mac:52:54:00:5c:9f:58 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:functional-195136 Clientid:01:52:54:00:5c:9f:58}
I0115 02:58:14.261984   23592 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined IP address 192.168.39.185 and MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:14.262091   23592 main.go:141] libmachine: (functional-195136) Calling .GetSSHPort
I0115 02:58:14.262250   23592 main.go:141] libmachine: (functional-195136) Calling .GetSSHKeyPath
I0115 02:58:14.262399   23592 main.go:141] libmachine: (functional-195136) Calling .GetSSHUsername
I0115 02:58:14.262559   23592 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/functional-195136/id_rsa Username:docker}
I0115 02:58:14.358799   23592 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 02:58:14.452897   23592 main.go:141] libmachine: Making call to close driver server
I0115 02:58:14.452915   23592 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:14.453197   23592 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:14.453230   23592 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 02:58:14.453249   23592 main.go:141] libmachine: Making call to close driver server
I0115 02:58:14.453260   23592 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:14.453259   23592 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
I0115 02:58:14.453468   23592 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:14.453490   23592 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
I0115 02:58:14.453498   23592 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
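
Note: the JSON format is the machine-readable variant of the listing; a sketch of extracting tag/size pairs from it (jq is an assumption, not part of the test):

	out/minikube-linux-amd64 -p functional-195136 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0]) \(.size)"'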

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195136 image ls --format yaml --alsologtostderr:
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "70520324"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-195136
size: "10823156"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:5f469a56a4c1dbc92800e748a2671608532c8007bd510e25b06d6ff5efae9e78
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-195136
size: "1006"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195136 image ls --format yaml --alsologtostderr:
I0115 02:58:13.379993   23431 out.go:296] Setting OutFile to fd 1 ...
I0115 02:58:13.380146   23431 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:13.380156   23431 out.go:309] Setting ErrFile to fd 2...
I0115 02:58:13.380163   23431 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:13.380365   23431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
I0115 02:58:13.380969   23431 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:13.381093   23431 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:13.381518   23431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:13.381567   23431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:13.395440   23431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
I0115 02:58:13.395850   23431 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:13.396409   23431 main.go:141] libmachine: Using API Version  1
I0115 02:58:13.396429   23431 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:13.396751   23431 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:13.396952   23431 main.go:141] libmachine: (functional-195136) Calling .GetState
I0115 02:58:13.398677   23431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:13.398720   23431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:13.412478   23431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
I0115 02:58:13.412862   23431 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:13.413298   23431 main.go:141] libmachine: Using API Version  1
I0115 02:58:13.413317   23431 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:13.413667   23431 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:13.413853   23431 main.go:141] libmachine: (functional-195136) Calling .DriverName
I0115 02:58:13.414037   23431 ssh_runner.go:195] Run: systemctl --version
I0115 02:58:13.414059   23431 main.go:141] libmachine: (functional-195136) Calling .GetSSHHostname
I0115 02:58:13.416771   23431 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:13.417195   23431 main.go:141] libmachine: (functional-195136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:58", ip: ""} in network mk-functional-195136: {Iface:virbr1 ExpiryTime:2024-01-15 03:54:20 +0000 UTC Type:0 Mac:52:54:00:5c:9f:58 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:functional-195136 Clientid:01:52:54:00:5c:9f:58}
I0115 02:58:13.417234   23431 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined IP address 192.168.39.185 and MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:13.417395   23431 main.go:141] libmachine: (functional-195136) Calling .GetSSHPort
I0115 02:58:13.417591   23431 main.go:141] libmachine: (functional-195136) Calling .GetSSHKeyPath
I0115 02:58:13.417745   23431 main.go:141] libmachine: (functional-195136) Calling .GetSSHUsername
I0115 02:58:13.417889   23431 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/functional-195136/id_rsa Username:docker}
I0115 02:58:13.518126   23431 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 02:58:13.576862   23431 main.go:141] libmachine: Making call to close driver server
I0115 02:58:13.576884   23431 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:13.577136   23431 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
I0115 02:58:13.577148   23431 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:13.577162   23431 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 02:58:13.577174   23431 main.go:141] libmachine: Making call to close driver server
I0115 02:58:13.577184   23431 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:13.577400   23431 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:13.577414   23431 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh pgrep buildkitd: exit status 1 (220.516272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image build -t localhost/my-image:functional-195136 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image build -t localhost/my-image:functional-195136 testdata/build --alsologtostderr: (5.341620202s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195136 image build -t localhost/my-image:functional-195136 testdata/build --alsologtostderr:
I0115 02:58:13.869360   23527 out.go:296] Setting OutFile to fd 1 ...
I0115 02:58:13.869716   23527 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:13.869730   23527 out.go:309] Setting ErrFile to fd 2...
I0115 02:58:13.869738   23527 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 02:58:13.870038   23527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
I0115 02:58:13.870797   23527 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:13.871342   23527 config.go:182] Loaded profile config "functional-195136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 02:58:13.872115   23527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:13.872187   23527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:13.888067   23527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
I0115 02:58:13.888545   23527 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:13.889152   23527 main.go:141] libmachine: Using API Version  1
I0115 02:58:13.889186   23527 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:13.889573   23527 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:13.889740   23527 main.go:141] libmachine: (functional-195136) Calling .GetState
I0115 02:58:13.891610   23527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 02:58:13.891653   23527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 02:58:13.905940   23527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
I0115 02:58:13.906322   23527 main.go:141] libmachine: () Calling .GetVersion
I0115 02:58:13.906869   23527 main.go:141] libmachine: Using API Version  1
I0115 02:58:13.906899   23527 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 02:58:13.907222   23527 main.go:141] libmachine: () Calling .GetMachineName
I0115 02:58:13.907434   23527 main.go:141] libmachine: (functional-195136) Calling .DriverName
I0115 02:58:13.907652   23527 ssh_runner.go:195] Run: systemctl --version
I0115 02:58:13.907674   23527 main.go:141] libmachine: (functional-195136) Calling .GetSSHHostname
I0115 02:58:13.910528   23527 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:13.910987   23527 main.go:141] libmachine: (functional-195136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:58", ip: ""} in network mk-functional-195136: {Iface:virbr1 ExpiryTime:2024-01-15 03:54:20 +0000 UTC Type:0 Mac:52:54:00:5c:9f:58 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:functional-195136 Clientid:01:52:54:00:5c:9f:58}
I0115 02:58:13.911024   23527 main.go:141] libmachine: (functional-195136) DBG | domain functional-195136 has defined IP address 192.168.39.185 and MAC address 52:54:00:5c:9f:58 in network mk-functional-195136
I0115 02:58:13.911128   23527 main.go:141] libmachine: (functional-195136) Calling .GetSSHPort
I0115 02:58:13.911285   23527 main.go:141] libmachine: (functional-195136) Calling .GetSSHKeyPath
I0115 02:58:13.911641   23527 main.go:141] libmachine: (functional-195136) Calling .GetSSHUsername
I0115 02:58:13.911785   23527 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/functional-195136/id_rsa Username:docker}
I0115 02:58:14.005112   23527 build_images.go:151] Building image from path: /tmp/build.1246985126.tar
I0115 02:58:14.005185   23527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 02:58:14.014365   23527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1246985126.tar
I0115 02:58:14.019707   23527 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1246985126.tar: stat -c "%s %y" /var/lib/minikube/build/build.1246985126.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1246985126.tar': No such file or directory
I0115 02:58:14.019736   23527 ssh_runner.go:362] scp /tmp/build.1246985126.tar --> /var/lib/minikube/build/build.1246985126.tar (3072 bytes)
I0115 02:58:14.059320   23527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1246985126
I0115 02:58:14.075864   23527 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1246985126 -xf /var/lib/minikube/build/build.1246985126.tar
I0115 02:58:14.084359   23527 containerd.go:379] Building image: /var/lib/minikube/build/build.1246985126
I0115 02:58:14.084420   23527 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1246985126 --local dockerfile=/var/lib/minikube/build/build.1246985126 --output type=image,name=localhost/my-image:functional-195136
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.3s

#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.2s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1d3518aeb302e965a97d0d6f181ccd3d04d563c040f79b82d8ae08cae61cb242 0.0s done
#8 exporting config sha256:0afb8a7250d602e0d2e621174985219afeacf3c161b85c30b1443ee0c1677475 0.0s done
#8 naming to localhost/my-image:functional-195136
#8 naming to localhost/my-image:functional-195136 done
#8 DONE 0.2s
I0115 02:58:19.092637   23527 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1246985126 --local dockerfile=/var/lib/minikube/build/build.1246985126 --output type=image,name=localhost/my-image:functional-195136: (5.008178929s)
I0115 02:58:19.092719   23527 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1246985126
I0115 02:58:19.105526   23527 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1246985126.tar
I0115 02:58:19.139635   23527 build_images.go:207] Built localhost/my-image:functional-195136 from /tmp/build.1246985126.tar
I0115 02:58:19.139665   23527 build_images.go:123] succeeded building to: functional-195136
I0115 02:58:19.139669   23527 build_images.go:124] failed building to: 
I0115 02:58:19.139693   23527 main.go:141] libmachine: Making call to close driver server
I0115 02:58:19.139714   23527 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:19.139969   23527 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:19.139985   23527 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 02:58:19.139995   23527 main.go:141] libmachine: Making call to close driver server
I0115 02:58:19.140003   23527 main.go:141] libmachine: (functional-195136) Calling .Close
I0115 02:58:19.140290   23527 main.go:141] libmachine: Successfully made call to close driver server
I0115 02:58:19.140304   23527 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 02:58:19.140322   23527 main.go:141] libmachine: (functional-195136) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.82s)
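For reference, the build stages above (#5-#7) imply a three-line Dockerfile; this is a reconstruction, since the log only shows the 97B transfer and the stage names, not the file itself:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /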

TestFunctional/parallel/ImageCommands/Setup (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.493776217s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-195136
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image load --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image load --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr: (4.994661065s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.24s)

TestFunctional/parallel/MountCmd/any-port (19.51s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdany-port3883734359/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705287457812701622" to /tmp/TestFunctionalparallelMountCmdany-port3883734359/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705287457812701622" to /tmp/TestFunctionalparallelMountCmdany-port3883734359/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705287457812701622" to /tmp/TestFunctionalparallelMountCmdany-port3883734359/001/test-1705287457812701622
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.321447ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 02:57 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 02:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 02:57 test-1705287457812701622
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh cat /mount-9p/test-1705287457812701622
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-195136 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f9be0760-895a-4f6b-9318-a0b4d70cce1e] Pending
helpers_test.go:344: "busybox-mount" [f9be0760-895a-4f6b-9318-a0b4d70cce1e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f9be0760-895a-4f6b-9318-a0b4d70cce1e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f9be0760-895a-4f6b-9318-a0b4d70cce1e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.004678964s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-195136 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdany-port3883734359/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.51s)
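The 9p mount exercised above can be reproduced by hand with the same commands the test drives; a minimal sketch, assuming a running functional-195136 profile (/tmp/somedir is an illustrative host path, not one from the log):

	out/minikube-linux-amd64 mount -p functional-195136 /tmp/somedir:/mount-9p &
	out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-195136 ssh "sudo umount -f /mount-9p"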

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image load --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image load --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr: (2.536859887s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.77s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.50228633s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-195136
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image load --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image load --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr: (4.516070566s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image save gcr.io/google-containers/addon-resizer:functional-195136 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image save gcr.io/google-containers/addon-resizer:functional-195136 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.200880597s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image rm gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.396410409s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-195136
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 image save --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 image save --daemon gcr.io/google-containers/addon-resizer:functional-195136 --alsologtostderr: (1.818513229s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-195136
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdspecific-port3670433439/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.005329ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdspecific-port3670433439/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195136 ssh "sudo umount -f /mount-9p": exit status 1 (257.795228ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-195136 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdspecific-port3670433439/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup416453685/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup416453685/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup416453685/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-195136 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup416453685/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup416453685/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup416453685/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-195136 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-195136 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-xnxbs" [61343889-a2e7-4d55-8f58-a6228f560fed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-xnxbs" [61343889-a2e7-4d55-8f58-a6228f560fed] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005034688s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "265.976796ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.072449ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "234.756197ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "56.665814ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/ServiceCmd/List (1.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 service list: (1.256089789s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.26s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-195136 service list -o json: (1.283040627s)
functional_test.go:1493: Took "1.283144463s" to run "out/minikube-linux-amd64 -p functional-195136 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.185:31899
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-195136 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.185:31899
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-195136
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-195136
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-195136
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestHA/serial/StartCluster (282.35s)

=== RUN   TestHA/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-680410 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0115 02:59:53.534373   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:00:21.218912   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:02:34.543324   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:34.548612   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:34.558910   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:34.579148   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:34.619420   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:34.699720   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:34.860122   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:35.180560   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:35.821542   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:37.101774   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:39.663527   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:44.784375   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:02:55.024659   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-680410 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m41.672648667s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
--- PASS: TestHA/serial/StartCluster (282.35s)

TestHA/serial/DeployApp (39.49s)

=== RUN   TestHA/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- rollout status deployment/busybox
E0115 03:03:15.505791   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-680410 -- rollout status deployment/busybox: (36.942169036s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-g7qsd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-h2zgj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-xq99z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-g7qsd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-h2zgj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-xq99z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-g7qsd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-h2zgj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-xq99z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestHA/serial/DeployApp (39.49s)

TestHA/serial/PingHostFromPods (1.36s)

=== RUN   TestHA/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-g7qsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-g7qsd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-h2zgj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-h2zgj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-xq99z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-680410 -- exec busybox-5bc68d56bd-xq99z -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestHA/serial/PingHostFromPods (1.36s)
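The per-pod check above is two execs: resolve host.minikube.internal inside the pod (busybox nslookup prints the answer on line 5; cut takes the third space-delimited field, here 192.168.39.1) and then ping that address. A sketch of the same check against one pod, using plain kubectl with the profile's context (pod name taken from the log):

	HOST_IP=$(kubectl --context ha-680410 exec busybox-5bc68d56bd-g7qsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-680410 exec busybox-5bc68d56bd-g7qsd -- sh -c "ping -c 1 $HOST_IP"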

TestHA/serial/AddWorkerNode (49.85s)

=== RUN   TestHA/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-680410 -v=7 --alsologtostderr
E0115 03:03:56.466498   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-680410 -v=7 --alsologtostderr: (48.972758435s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
--- PASS: TestHA/serial/AddWorkerNode (49.85s)

TestHA/serial/NodeLabels (0.07s)

=== RUN   TestHA/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-680410 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestHA/serial/NodeLabels (0.07s)

TestHA/serial/HAppyAfterClusterStart (0.6s)

=== RUN   TestHA/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestHA/serial/HAppyAfterClusterStart (0.60s)

TestHA/serial/CopyFile (13.64s)

=== RUN   TestHA/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp testdata/cp-test.txt ha-680410:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410:/home/docker/cp-test.txt /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410:/home/docker/cp-test.txt ha-680410-m02:/home/docker/cp-test_ha-680410_ha-680410-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test_ha-680410_ha-680410-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410:/home/docker/cp-test.txt ha-680410-m03:/home/docker/cp-test_ha-680410_ha-680410-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test_ha-680410_ha-680410-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410:/home/docker/cp-test.txt ha-680410-m04:/home/docker/cp-test_ha-680410_ha-680410-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test_ha-680410_ha-680410-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp testdata/cp-test.txt ha-680410-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m02:/home/docker/cp-test.txt /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m02:/home/docker/cp-test.txt ha-680410:/home/docker/cp-test_ha-680410-m02_ha-680410.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test_ha-680410-m02_ha-680410.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m02:/home/docker/cp-test.txt ha-680410-m03:/home/docker/cp-test_ha-680410-m02_ha-680410-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test_ha-680410-m02_ha-680410-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m02:/home/docker/cp-test.txt ha-680410-m04:/home/docker/cp-test_ha-680410-m02_ha-680410-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test_ha-680410-m02_ha-680410-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp testdata/cp-test.txt ha-680410-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt ha-680410:/home/docker/cp-test_ha-680410-m03_ha-680410.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test_ha-680410-m03_ha-680410.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt ha-680410-m02:/home/docker/cp-test_ha-680410-m03_ha-680410-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test_ha-680410-m03_ha-680410-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m03:/home/docker/cp-test.txt ha-680410-m04:/home/docker/cp-test_ha-680410-m03_ha-680410-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test_ha-680410-m03_ha-680410-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp testdata/cp-test.txt ha-680410-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt /tmp/TestHAserialCopyFile2725737547/001/cp-test_ha-680410-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt ha-680410:/home/docker/cp-test_ha-680410-m04_ha-680410.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410 "sudo cat /home/docker/cp-test_ha-680410-m04_ha-680410.txt"
E0115 03:04:53.534752   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt ha-680410-m02:/home/docker/cp-test_ha-680410-m04_ha-680410-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m02 "sudo cat /home/docker/cp-test_ha-680410-m04_ha-680410-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 cp ha-680410-m04:/home/docker/cp-test.txt ha-680410-m03:/home/docker/cp-test_ha-680410-m04_ha-680410-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 ssh -n ha-680410-m03 "sudo cat /home/docker/cp-test_ha-680410-m04_ha-680410-m03.txt"
--- PASS: TestHA/serial/CopyFile (13.64s)
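The copy matrix above repeats one pattern per node pair: seed a source node from testdata, cp node-to-node, then cat the result over ssh on the destination. A compact shell sketch of the same loop (illustrative only; the test itself is Go, and the node names are taken from the log):

	NODES="ha-680410 ha-680410-m02 ha-680410-m03 ha-680410-m04"
	for SRC in $NODES; do
	  out/minikube-linux-amd64 -p ha-680410 cp testdata/cp-test.txt $SRC:/home/docker/cp-test.txt
	  for DST in $NODES; do
	    [ "$SRC" = "$DST" ] && continue
	    out/minikube-linux-amd64 -p ha-680410 cp $SRC:/home/docker/cp-test.txt $DST:/home/docker/cp-test_${SRC}_${DST}.txt
	    out/minikube-linux-amd64 -p ha-680410 ssh -n $DST "sudo cat /home/docker/cp-test_${SRC}_${DST}.txt"
	  done
	done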

TestHA/serial/DegradedAfterControlPlaneNodeStop (3.51s)

=== RUN   TestHA/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.508201383s)
--- PASS: TestHA/serial/DegradedAfterControlPlaneNodeStop (3.51s)

TestHA/serial/HAppyAfterSecondaryNodeRestart (0.43s)

=== RUN   TestHA/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestHA/serial/HAppyAfterSecondaryNodeRestart (0.43s)

TestHA/serial/RestartClusterKeepsNodes (394.44s)

=== RUN   TestHA/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-680410 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-680410 -v=7 --alsologtostderr
E0115 03:07:34.543755   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:08:02.227585   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:09:53.534476   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-680410 -v=7 --alsologtostderr: (3m6.637523338s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-680410 --wait=true -v=7 --alsologtostderr
E0115 03:11:16.579680   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:12:34.543582   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-680410 --wait=true -v=7 --alsologtostderr: (3m27.682837942s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-680410
--- PASS: TestHA/serial/RestartClusterKeepsNodes (394.44s)

TestHA/serial/DeleteSecondaryNode (7.98s)

=== RUN   TestHA/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-680410 node delete m03 -v=7 --alsologtostderr: (7.202953022s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestHA/serial/DeleteSecondaryNode (7.98s)

TestHA/serial/DegradedAfterSecondaryNodeDelete (0.39s)

=== RUN   TestHA/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestHA/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestHA/serial/StopCluster (275.33s)

=== RUN   TestHA/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 stop -v=7 --alsologtostderr
E0115 03:14:53.533880   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:17:34.543141   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-680410 stop -v=7 --alsologtostderr: (4m35.225626559s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr: exit status 7 (104.048887ms)

-- stdout --
	ha-680410
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-680410-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 03:18:35.932256   32065 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:18:35.932403   32065 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:18:35.932413   32065 out.go:309] Setting ErrFile to fd 2...
	I0115 03:18:35.932418   32065 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:18:35.932627   32065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:18:35.932833   32065 out.go:303] Setting JSON to false
	I0115 03:18:35.932870   32065 mustload.go:65] Loading cluster: ha-680410
	I0115 03:18:35.932985   32065 notify.go:220] Checking for updates...
	I0115 03:18:35.933334   32065 config.go:182] Loaded profile config "ha-680410": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:18:35.933349   32065 status.go:255] checking status of ha-680410 ...
	I0115 03:18:35.933753   32065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:18:35.933825   32065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:18:35.948113   32065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0115 03:18:35.948439   32065 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:18:35.948992   32065 main.go:141] libmachine: Using API Version  1
	I0115 03:18:35.949019   32065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:18:35.949328   32065 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:18:35.949495   32065 main.go:141] libmachine: (ha-680410) Calling .GetState
	I0115 03:18:35.950791   32065 status.go:330] ha-680410 host status = "Stopped" (err=<nil>)
	I0115 03:18:35.950804   32065 status.go:343] host is not running, skipping remaining checks
	I0115 03:18:35.950809   32065 status.go:257] ha-680410 status: &{Name:ha-680410 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:18:35.950835   32065 status.go:255] checking status of ha-680410-m02 ...
	I0115 03:18:35.951094   32065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:18:35.951124   32065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:18:35.964317   32065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I0115 03:18:35.964645   32065 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:18:35.965027   32065 main.go:141] libmachine: Using API Version  1
	I0115 03:18:35.965047   32065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:18:35.965390   32065 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:18:35.965545   32065 main.go:141] libmachine: (ha-680410-m02) Calling .GetState
	I0115 03:18:35.966860   32065 status.go:330] ha-680410-m02 host status = "Stopped" (err=<nil>)
	I0115 03:18:35.966872   32065 status.go:343] host is not running, skipping remaining checks
	I0115 03:18:35.966878   32065 status.go:257] ha-680410-m02 status: &{Name:ha-680410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:18:35.966922   32065 status.go:255] checking status of ha-680410-m04 ...
	I0115 03:18:35.967188   32065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:18:35.967226   32065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:18:35.980039   32065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0115 03:18:35.980340   32065 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:18:35.980666   32065 main.go:141] libmachine: Using API Version  1
	I0115 03:18:35.980694   32065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:18:35.981001   32065 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:18:35.981162   32065 main.go:141] libmachine: (ha-680410-m04) Calling .GetState
	I0115 03:18:35.982423   32065 status.go:330] ha-680410-m04 host status = "Stopped" (err=<nil>)
	I0115 03:18:35.982438   32065 status.go:343] host is not running, skipping remaining checks
	I0115 03:18:35.982443   32065 status.go:257] ha-680410-m04 status: &{Name:ha-680410-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestHA/serial/StopCluster (275.33s)

TestHA/serial/RestartCluster (157.09s)

=== RUN   TestHA/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-680410 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0115 03:18:57.588203   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
E0115 03:19:53.534424   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-680410 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m36.282309007s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestHA/serial/RestartCluster (157.09s)
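
Note: the node-readiness assertion at ha_test.go:592 above packs a Go template into one line. The same template, unpacked here for readability (the {{- markers trim the added whitespace, so the output is identical to the one-liner in the log):

    kubectl get nodes -o go-template='
    {{- range .items}}
      {{- range .status.conditions}}
        {{- if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
      {{- end}}
    {{- end}}'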

TestHA/serial/DegradedAfterClusterRestart (0.42s)

=== RUN   TestHA/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestHA/serial/DegradedAfterClusterRestart (0.42s)

TestHA/serial/AddSecondaryNode (74.32s)

=== RUN   TestHA/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-680410 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-680410 --control-plane -v=7 --alsologtostderr: (1m13.469368111s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-680410 status -v=7 --alsologtostderr
--- PASS: TestHA/serial/AddSecondaryNode (74.32s)

TestHA/serial/HAppyAfterSecondaryNodeAdd (0.58s)

=== RUN   TestHA/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestHA/serial/HAppyAfterSecondaryNodeAdd (0.58s)

TestIngressAddonLegacy/StartLegacyK8sCluster (94.64s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-385885 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0115 03:22:34.543525   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-385885 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m34.635811224s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (94.64s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons enable ingress --alsologtostderr -v=5: (12.423954622s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.42s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-385885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-385885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.544849406s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-385885 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-385885 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3fdf3093-bc37-40ff-abc2-352315700770] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3fdf3093-bc37-40ff-abc2-352315700770] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.003665195s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-385885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-385885 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-385885 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.160
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons disable ingress-dns --alsologtostderr -v=1: (6.17656057s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons disable ingress --alsologtostderr -v=1
E0115 03:24:53.534315   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-385885 addons disable ingress --alsologtostderr -v=1: (7.569199777s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.50s)
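
Note: the validation above (in-VM curl with a spoofed Host header, then ingress-dns resolution against the machine IP) can be reproduced by hand with the same commands the test drives. A minimal sketch, using the profile name and IP from this run:

    # 1. Hit the ingress from inside the VM, pretending to be nginx.example.com.
    out/minikube-linux-amd64 -p ingress-addon-legacy-385885 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # 2. Resolve a hostname through the ingress-dns addon against the machine IP.
    IP=$(out/minikube-linux-amd64 -p ingress-addon-legacy-385885 ip)   # 192.168.39.160 in this run
    nslookup hello-john.test "$IP"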

TestJSONOutput/start/Command (100.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-715553 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-715553 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m40.642972018s)
--- PASS: TestJSONOutput/start/Command (100.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-715553 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-715553 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.24s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-715553 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-715553 --output=json --user=testUser: (7.243253219s)
--- PASS: TestJSONOutput/stop/Command (7.24s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-999690 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-999690 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.621579ms)

-- stdout --
	{"specversion":"1.0","id":"ee00bffb-cb8d-4df8-8607-af9bdf47acbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-999690] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e2cc5fc-fa79-441b-8abb-ce3e4db8d450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17909"}}
	{"specversion":"1.0","id":"ce824f22-aaa7-4932-8007-e3bd696fa31d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cc739952-cdb1-416c-bc9f-e97ecfe5affb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig"}}
	{"specversion":"1.0","id":"a911ef0f-65e9-4cfb-8208-9ef27c6c92df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube"}}
	{"specversion":"1.0","id":"f184f72b-bd7d-44c7-832a-bdfabefd642e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"314c15ac-5fd6-4506-bf62-e42bb131f198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"790f99c0-426e-4baa-990c-0fabd24fb0c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-999690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-999690
--- PASS: TestErrorJSONOutput (0.21s)
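
Note: each stdout line above is a CloudEvents envelope, and the failure itself is the single event of type io.k8s.sigs.minikube.error. A sketch of pulling that event out of the stream with jq (jq is an assumption of this sketch, not something the test uses):

    out/minikube-linux-amd64 start -p json-output-error-999690 \
      --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
    # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)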

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (102.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-024824 --driver=kvm2  --container-runtime=containerd
E0115 03:27:34.543147   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-024824 --driver=kvm2  --container-runtime=containerd: (45.858264707s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-027010 --driver=kvm2  --container-runtime=containerd
E0115 03:27:56.580524   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-027010 --driver=kvm2  --container-runtime=containerd: (53.240089128s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-024824
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-027010
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-027010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-027010
helpers_test.go:175: Cleaning up "first-024824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-024824
--- PASS: TestMinikubeProfile (102.01s)
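
Note: each profile switch above is asserted by re-reading profile list -ojson after the out/minikube-linux-amd64 profile <name> call. A sketch of the same check with jq; the .valid[].Name path is an assumption about the JSON shape, not taken from this log:

    out/minikube-linux-amd64 profile first-024824
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'   # assumed shape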

TestMountStart/serial/StartWithMountFirst (31.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-760644 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-760644 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.037285818s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.04s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-760644 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-760644 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (30.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-776423 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0115 03:29:22.059473   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.064713   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.074924   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.095145   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.135365   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.215668   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.376046   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:22.696613   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:23.337483   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:24.617663   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:27.178423   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:32.299342   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-776423 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.290341856s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.29s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776423 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776423 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-760644 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.5s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776423 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776423 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.50s)

TestMountStart/serial/Stop (1.42s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-776423
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-776423: (1.417892538s)
--- PASS: TestMountStart/serial/Stop (1.42s)

TestMountStart/serial/RestartStopped (25.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-776423
E0115 03:29:42.539769   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:29:53.534415   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-776423: (24.322731651s)
E0115 03:30:03.020549   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (25.32s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776423 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776423 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (108.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-995684 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0115 03:30:43.980738   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-995684 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m48.520682501s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.94s)

TestMultiNode/serial/DeployApp2Nodes (6.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-995684 -- rollout status deployment/busybox: (4.431033168s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-2xszl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-zprb9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-2xszl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-zprb9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-2xszl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-zprb9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.18s)
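
Note: the rollout check above asserts that cluster DNS answers from busybox replicas scheduled on both nodes. A condensed sketch of the final step, with the pod names from this run (kubectl --context is the plain-kubectl equivalent of the out/minikube-linux-amd64 kubectl wrapper the test uses):

    for POD in busybox-5bc68d56bd-2xszl busybox-5bc68d56bd-zprb9; do
      kubectl --context multinode-995684 exec "$POD" -- \
        nslookup kubernetes.default.svc.cluster.local
    done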

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-2xszl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-2xszl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-zprb9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-995684 -- exec busybox-5bc68d56bd-zprb9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
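
Note: the host-reachability test above is a two-step pattern: resolve host.minikube.internal from inside a pod, then ping the address it returns. The same steps, condensed (pod name from this run; the awk/cut indices assume busybox's nslookup output layout, exactly as the test does):

    POD=busybox-5bc68d56bd-2xszl
    HOST_IP=$(kubectl --context multinode-995684 exec "$POD" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-995684 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"
    # HOST_IP resolved to 192.168.39.1 in this run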

TestMultiNode/serial/AddNode (43.85s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-995684 -v 3 --alsologtostderr
E0115 03:32:05.901507   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:32:34.543503   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-995684 -v 3 --alsologtostderr: (43.277117953s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.85s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-995684 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp testdata/cp-test.txt multinode-995684:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1584275264/001/cp-test_multinode-995684.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684:/home/docker/cp-test.txt multinode-995684-m02:/home/docker/cp-test_multinode-995684_multinode-995684-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 "sudo cat /home/docker/cp-test_multinode-995684_multinode-995684-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684:/home/docker/cp-test.txt multinode-995684-m03:/home/docker/cp-test_multinode-995684_multinode-995684-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m03 "sudo cat /home/docker/cp-test_multinode-995684_multinode-995684-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp testdata/cp-test.txt multinode-995684-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1584275264/001/cp-test_multinode-995684-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684-m02:/home/docker/cp-test.txt multinode-995684:/home/docker/cp-test_multinode-995684-m02_multinode-995684.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684 "sudo cat /home/docker/cp-test_multinode-995684-m02_multinode-995684.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684-m02:/home/docker/cp-test.txt multinode-995684-m03:/home/docker/cp-test_multinode-995684-m02_multinode-995684-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m03 "sudo cat /home/docker/cp-test_multinode-995684-m02_multinode-995684-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp testdata/cp-test.txt multinode-995684-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1584275264/001/cp-test_multinode-995684-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684-m03:/home/docker/cp-test.txt multinode-995684:/home/docker/cp-test_multinode-995684-m03_multinode-995684.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684 "sudo cat /home/docker/cp-test_multinode-995684-m03_multinode-995684.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 cp multinode-995684-m03:/home/docker/cp-test.txt multinode-995684-m02:/home/docker/cp-test_multinode-995684-m03_multinode-995684-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 "sudo cat /home/docker/cp-test_multinode-995684-m03_multinode-995684-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.64s)
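
Note: every leg of the copy matrix above follows one pattern: minikube cp a file onto a node, then ssh -n into that node and cat the file back. One leg, condensed (profile, node, and paths as in this run):

    out/minikube-linux-amd64 -p multinode-995684 cp testdata/cp-test.txt \
      multinode-995684-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-995684 ssh -n multinode-995684-m02 \
      "sudo cat /home/docker/cp-test.txt"   # verify the contents round-tripped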

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-995684 node stop m03: (1.404491107s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-995684 status: exit status 7 (440.088752ms)

-- stdout --
	multinode-995684
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-995684-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-995684-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr: exit status 7 (436.949969ms)

-- stdout --
	multinode-995684
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-995684-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-995684-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 03:32:54.726207   40060 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:32:54.726351   40060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:32:54.726361   40060 out.go:309] Setting ErrFile to fd 2...
	I0115 03:32:54.726365   40060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:32:54.726659   40060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:32:54.726858   40060 out.go:303] Setting JSON to false
	I0115 03:32:54.726899   40060 mustload.go:65] Loading cluster: multinode-995684
	I0115 03:32:54.727001   40060 notify.go:220] Checking for updates...
	I0115 03:32:54.727450   40060 config.go:182] Loaded profile config "multinode-995684": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:32:54.727476   40060 status.go:255] checking status of multinode-995684 ...
	I0115 03:32:54.727922   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:54.727956   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:54.753443   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I0115 03:32:54.753821   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:54.754302   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:54.754321   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:54.754668   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:54.754873   40060 main.go:141] libmachine: (multinode-995684) Calling .GetState
	I0115 03:32:54.756381   40060 status.go:330] multinode-995684 host status = "Running" (err=<nil>)
	I0115 03:32:54.756400   40060 host.go:66] Checking if "multinode-995684" exists ...
	I0115 03:32:54.756717   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:54.756750   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:54.770342   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0115 03:32:54.770722   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:54.771179   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:54.771199   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:54.771519   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:54.771681   40060 main.go:141] libmachine: (multinode-995684) Calling .GetIP
	I0115 03:32:54.773947   40060 main.go:141] libmachine: (multinode-995684) DBG | domain multinode-995684 has defined MAC address 52:54:00:11:f0:e4 in network mk-multinode-995684
	I0115 03:32:54.774273   40060 main.go:141] libmachine: (multinode-995684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:f0:e4", ip: ""} in network mk-multinode-995684: {Iface:virbr1 ExpiryTime:2024-01-15 04:30:20 +0000 UTC Type:0 Mac:52:54:00:11:f0:e4 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-995684 Clientid:01:52:54:00:11:f0:e4}
	I0115 03:32:54.774316   40060 main.go:141] libmachine: (multinode-995684) DBG | domain multinode-995684 has defined IP address 192.168.39.109 and MAC address 52:54:00:11:f0:e4 in network mk-multinode-995684
	I0115 03:32:54.774421   40060 host.go:66] Checking if "multinode-995684" exists ...
	I0115 03:32:54.774700   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:54.774754   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:54.788356   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0115 03:32:54.788675   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:54.789064   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:54.789080   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:54.789342   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:54.789542   40060 main.go:141] libmachine: (multinode-995684) Calling .DriverName
	I0115 03:32:54.789766   40060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:32:54.789795   40060 main.go:141] libmachine: (multinode-995684) Calling .GetSSHHostname
	I0115 03:32:54.792379   40060 main.go:141] libmachine: (multinode-995684) DBG | domain multinode-995684 has defined MAC address 52:54:00:11:f0:e4 in network mk-multinode-995684
	I0115 03:32:54.792721   40060 main.go:141] libmachine: (multinode-995684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:f0:e4", ip: ""} in network mk-multinode-995684: {Iface:virbr1 ExpiryTime:2024-01-15 04:30:20 +0000 UTC Type:0 Mac:52:54:00:11:f0:e4 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-995684 Clientid:01:52:54:00:11:f0:e4}
	I0115 03:32:54.792742   40060 main.go:141] libmachine: (multinode-995684) DBG | domain multinode-995684 has defined IP address 192.168.39.109 and MAC address 52:54:00:11:f0:e4 in network mk-multinode-995684
	I0115 03:32:54.792897   40060 main.go:141] libmachine: (multinode-995684) Calling .GetSSHPort
	I0115 03:32:54.793058   40060 main.go:141] libmachine: (multinode-995684) Calling .GetSSHKeyPath
	I0115 03:32:54.793219   40060 main.go:141] libmachine: (multinode-995684) Calling .GetSSHUsername
	I0115 03:32:54.793327   40060 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/multinode-995684/id_rsa Username:docker}
	I0115 03:32:54.882113   40060 ssh_runner.go:195] Run: systemctl --version
	I0115 03:32:54.887793   40060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:32:54.899903   40060 kubeconfig.go:125] found "multinode-995684" server: "https://192.168.39.109:8443"
	I0115 03:32:54.899924   40060 api_server.go:166] Checking apiserver status ...
	I0115 03:32:54.899948   40060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 03:32:54.910990   40060 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup
	I0115 03:32:54.919077   40060 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod252b3e2872172be26ab6505db2d9b9ea/8bd70c624d60e1833874df908fecbb072907d1f9a1ad3cdcdd646c3b536dcf53"
	I0115 03:32:54.919142   40060 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod252b3e2872172be26ab6505db2d9b9ea/8bd70c624d60e1833874df908fecbb072907d1f9a1ad3cdcdd646c3b536dcf53/freezer.state
	I0115 03:32:54.928259   40060 api_server.go:204] freezer state: "THAWED"
	I0115 03:32:54.928272   40060 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0115 03:32:54.933333   40060 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0115 03:32:54.933355   40060 status.go:424] multinode-995684 apiserver status = Running (err=<nil>)
	I0115 03:32:54.933365   40060 status.go:257] multinode-995684 status: &{Name:multinode-995684 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:32:54.933397   40060 status.go:255] checking status of multinode-995684-m02 ...
	I0115 03:32:54.933670   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:54.933703   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:54.948033   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0115 03:32:54.948374   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:54.950019   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:54.950059   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:54.950401   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:54.950546   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .GetState
	I0115 03:32:54.951967   40060 status.go:330] multinode-995684-m02 host status = "Running" (err=<nil>)
	I0115 03:32:54.951988   40060 host.go:66] Checking if "multinode-995684-m02" exists ...
	I0115 03:32:54.952242   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:54.952274   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:54.966572   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0115 03:32:54.966874   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:54.967252   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:54.967271   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:54.967576   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:54.967738   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .GetIP
	I0115 03:32:54.970177   40060 main.go:141] libmachine: (multinode-995684-m02) DBG | domain multinode-995684-m02 has defined MAC address 52:54:00:a8:ab:ae in network mk-multinode-995684
	I0115 03:32:54.970513   40060 main.go:141] libmachine: (multinode-995684-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ab:ae", ip: ""} in network mk-multinode-995684: {Iface:virbr1 ExpiryTime:2024-01-15 04:31:26 +0000 UTC Type:0 Mac:52:54:00:a8:ab:ae Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-995684-m02 Clientid:01:52:54:00:a8:ab:ae}
	I0115 03:32:54.970544   40060 main.go:141] libmachine: (multinode-995684-m02) DBG | domain multinode-995684-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:a8:ab:ae in network mk-multinode-995684
	I0115 03:32:54.970657   40060 host.go:66] Checking if "multinode-995684-m02" exists ...
	I0115 03:32:54.970912   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:54.970942   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:54.984231   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40843
	I0115 03:32:54.984557   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:54.984934   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:54.984960   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:54.985246   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:54.985423   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .DriverName
	I0115 03:32:54.985612   40060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 03:32:54.985634   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .GetSSHHostname
	I0115 03:32:54.988023   40060 main.go:141] libmachine: (multinode-995684-m02) DBG | domain multinode-995684-m02 has defined MAC address 52:54:00:a8:ab:ae in network mk-multinode-995684
	I0115 03:32:54.988365   40060 main.go:141] libmachine: (multinode-995684-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ab:ae", ip: ""} in network mk-multinode-995684: {Iface:virbr1 ExpiryTime:2024-01-15 04:31:26 +0000 UTC Type:0 Mac:52:54:00:a8:ab:ae Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-995684-m02 Clientid:01:52:54:00:a8:ab:ae}
	I0115 03:32:54.988391   40060 main.go:141] libmachine: (multinode-995684-m02) DBG | domain multinode-995684-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:a8:ab:ae in network mk-multinode-995684
	I0115 03:32:54.988506   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .GetSSHPort
	I0115 03:32:54.988644   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .GetSSHKeyPath
	I0115 03:32:54.988782   40060 main.go:141] libmachine: (multinode-995684-m02) Calling .GetSSHUsername
	I0115 03:32:54.988890   40060 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17909-7685/.minikube/machines/multinode-995684-m02/id_rsa Username:docker}
	I0115 03:32:55.078132   40060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 03:32:55.090682   40060 status.go:257] multinode-995684-m02 status: &{Name:multinode-995684-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:32:55.090714   40060 status.go:255] checking status of multinode-995684-m03 ...
	I0115 03:32:55.091047   40060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:32:55.091084   40060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:32:55.105220   40060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I0115 03:32:55.105593   40060 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:32:55.106041   40060 main.go:141] libmachine: Using API Version  1
	I0115 03:32:55.106059   40060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:32:55.106405   40060 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:32:55.106572   40060 main.go:141] libmachine: (multinode-995684-m03) Calling .GetState
	I0115 03:32:55.107955   40060 status.go:330] multinode-995684-m03 host status = "Stopped" (err=<nil>)
	I0115 03:32:55.107966   40060 status.go:343] host is not running, skipping remaining checks
	I0115 03:32:55.107970   40060 status.go:257] multinode-995684-m03 status: &{Name:multinode-995684-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
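The status.go lines above print one record per node. The following is a minimal Go sketch of that record's shape, reconstructed purely from the fields visible in this log (minikube's real status type may differ):

package main

import "fmt"

// NodeStatus is a hypothetical reconstruction of the record printed at
// status.go:257 above; field names are copied from the log output.
type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := NodeStatus{
		Name:       "multinode-995684-m03",
		Host:       "Stopped",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Stopped",
		Worker:     true,
	}
	// Printing a pointer with %+v yields the "&{Name:... Host:...}" form above.
	fmt.Printf("%+v\n", &s)
}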

TestMultiNode/serial/StartAfterStop (29.6s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-995684 node start m03 -v=7 --alsologtostderr: (28.969602551s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.60s)

TestMultiNode/serial/RestartKeepsNodes (304.98s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-995684
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-995684
E0115 03:34:22.060206   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:34:49.745233   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:34:53.534949   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:35:37.589755   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-995684: (3m5.030905257s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-995684 --wait=true -v=8 --alsologtostderr
E0115 03:37:34.543550   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-995684 --wait=true -v=8 --alsologtostderr: (1m59.836573913s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-995684
--- PASS: TestMultiNode/serial/RestartKeepsNodes (304.98s)

TestMultiNode/serial/DeleteNode (2.18s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-995684 node delete m03: (1.66095789s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)
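The last kubectl invocation above passes a Go template that walks every node's status.conditions and prints the Ready condition's status. A self-contained sketch of how that template evaluates, run here against hypothetical in-memory data instead of the API server (field names are capitalized to match this sketch's Go structs; kubectl itself resolves the lowercase JSON names):

package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

type nodeList struct{ Items []node }

func main() {
	// Same template logic the test passes to kubectl.
	const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

	var n node
	n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list := nodeList{Items: []node{n, n}} // two hypothetical Ready nodes

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, list); err != nil { // prints " True" once per node
		panic(err)
	}
}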

TestMultiNode/serial/StopMultiNode (183.87s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 stop
E0115 03:39:22.059541   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:39:53.535101   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-995684 stop: (3m3.68249393s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-995684 status: exit status 7 (97.435542ms)

-- stdout --
	multinode-995684
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-995684-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr: exit status 7 (92.188811ms)

-- stdout --
	multinode-995684
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-995684-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 03:41:35.709597   42174 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:41:35.709737   42174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:41:35.709752   42174 out.go:309] Setting ErrFile to fd 2...
	I0115 03:41:35.709760   42174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:41:35.709983   42174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:41:35.710165   42174 out.go:303] Setting JSON to false
	I0115 03:41:35.710204   42174 mustload.go:65] Loading cluster: multinode-995684
	I0115 03:41:35.710286   42174 notify.go:220] Checking for updates...
	I0115 03:41:35.710723   42174 config.go:182] Loaded profile config "multinode-995684": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:41:35.710740   42174 status.go:255] checking status of multinode-995684 ...
	I0115 03:41:35.711310   42174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:41:35.711352   42174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:41:35.727537   42174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I0115 03:41:35.727950   42174 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:41:35.728534   42174 main.go:141] libmachine: Using API Version  1
	I0115 03:41:35.728562   42174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:41:35.728914   42174 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:41:35.729092   42174 main.go:141] libmachine: (multinode-995684) Calling .GetState
	I0115 03:41:35.730620   42174 status.go:330] multinode-995684 host status = "Stopped" (err=<nil>)
	I0115 03:41:35.730633   42174 status.go:343] host is not running, skipping remaining checks
	I0115 03:41:35.730638   42174 status.go:257] multinode-995684 status: &{Name:multinode-995684 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 03:41:35.730654   42174 status.go:255] checking status of multinode-995684-m02 ...
	I0115 03:41:35.730950   42174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 03:41:35.730984   42174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 03:41:35.745327   42174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I0115 03:41:35.745727   42174 main.go:141] libmachine: () Calling .GetVersion
	I0115 03:41:35.746146   42174 main.go:141] libmachine: Using API Version  1
	I0115 03:41:35.746168   42174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 03:41:35.746443   42174 main.go:141] libmachine: () Calling .GetMachineName
	I0115 03:41:35.746639   42174 main.go:141] libmachine: (multinode-995684-m02) Calling .GetState
	I0115 03:41:35.747989   42174 status.go:330] multinode-995684-m02 host status = "Stopped" (err=<nil>)
	I0115 03:41:35.748002   42174 status.go:343] host is not running, skipping remaining checks
	I0115 03:41:35.748019   42174 status.go:257] multinode-995684-m02 status: &{Name:multinode-995684-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.87s)
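Note that both status invocations above exit 7 rather than 0 once every host is stopped, so the exit code carries state information. A sketch of how a caller can treat that code as data rather than a hard failure, using only the binary path and profile name from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-995684", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 in this log corresponds to stopped hosts ("may be ok").
		fmt.Printf("status exited %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Print(string(out))
}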

TestMultiNode/serial/RestartMultiNode (114.73s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-995684 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0115 03:42:34.543879   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-995684 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m54.193687777s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-995684 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.73s)

TestMultiNode/serial/ValidateNameConflict (48.96s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-995684
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-995684-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-995684-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (72.777669ms)

-- stdout --
	* [multinode-995684-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-995684-m02' is duplicated with machine name 'multinode-995684-m02' in profile 'multinode-995684'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-995684-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-995684-m03 --driver=kvm2  --container-runtime=containerd: (47.64929955s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-995684
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-995684: exit status 80 (223.440846ms)

-- stdout --
	* Adding node m03 to cluster multinode-995684 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-995684-m03 already exists in multinode-995684-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-995684-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.96s)
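The MK_USAGE failure above rejects a profile whose name collides with a machine name generated by an existing multi-node profile. A hypothetical sketch of that uniqueness check (validateProfileName and its inputs are illustrative, not minikube's source):

package main

import "fmt"

// validateProfileName rejects a new profile name that matches a machine name
// an existing profile already owns (e.g. "<profile>-m02" for node two).
func validateProfileName(name string, existingMachines []string) error {
	for _, m := range existingMachines {
		if m == name {
			return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
		}
	}
	return nil
}

func main() {
	machines := []string{"multinode-995684", "multinode-995684-m02"}
	if err := validateProfileName("multinode-995684-m02", machines); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // mirrors the exit above
	}
}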

TestPreload (297.58s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-912091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0115 03:44:22.059334   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:44:36.581092   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:44:53.534791   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 03:45:45.105474   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-912091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m8.847563921s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-912091 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-912091 image pull gcr.io/k8s-minikube/busybox: (3.217627314s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-912091
E0115 03:47:34.543661   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-912091: (1m31.833107544s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-912091 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-912091 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m12.381124372s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-912091 image list
helpers_test.go:175: Cleaning up "test-preload-912091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-912091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-912091: (1.074538754s)
--- PASS: TestPreload (297.58s)
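The preload test above follows a fixed sequence: start with --preload=false, pull an extra image, stop, restart with preload enabled, and confirm the pulled image is still listed. A sketch that replays the same commands via os/exec (binary path, profile name, and flags all taken from this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run is a thin helper for this sketch: execute one command, stream its output.
func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	mk, p := "out/minikube-linux-amd64", "test-preload-912091"
	steps := [][]string{
		{mk, "start", "-p", p, "--memory=2200", "--wait=true", "--preload=false", "--driver=kvm2", "--container-runtime=containerd", "--kubernetes-version=v1.24.4"},
		{mk, "-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{mk, "stop", "-p", p},
		{mk, "start", "-p", p, "--memory=2200", "--wait=true", "--driver=kvm2", "--container-runtime=containerd"},
		{mk, "-p", p, "image", "list"}, // the pulled image must survive the restart
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", s, err)
			return
		}
	}
}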

TestScheduledStopUnix (118.12s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-849513 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0115 03:49:22.060083   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:49:53.534271   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-849513 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.317530194s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849513 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-849513 -n scheduled-stop-849513
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849513 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849513 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849513 -n scheduled-stop-849513
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-849513
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849513 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-849513
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-849513: exit status 7 (73.921895ms)

-- stdout --
	scheduled-stop-849513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849513 -n scheduled-stop-849513
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849513 -n scheduled-stop-849513: exit status 7 (86.049422ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-849513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-849513
--- PASS: TestScheduledStopUnix (118.12s)
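The scheduled-stop test drives one workflow: schedule a stop, replace the schedule, cancel it, and finally let a short schedule fire. A sketch of the same command sequence (flags verbatim from the log; error handling reduced to printing):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk, p := "out/minikube-linux-amd64", "scheduled-stop-849513"
	for _, args := range [][]string{
		{"stop", "-p", p, "--schedule", "5m"},
		{"stop", "-p", p, "--schedule", "15s"}, // re-scheduling replaces the 5m timer
		{"stop", "-p", p, "--cancel-scheduled"},
		{"status", "--format={{.TimeToStop}}", "-p", p},
	} {
		out, err := exec.Command(mk, args...).CombinedOutput()
		fmt.Printf("%v -> %s (err=%v)\n", args, out, err)
	}
}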

TestRunningBinaryUpgrade (235.39s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.650938030 start -p running-upgrade-565552 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0115 03:54:22.059522   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 03:54:53.534234   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.650938030 start -p running-upgrade-565552 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m38.539995568s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-565552 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-565552 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m12.008200697s)
helpers_test.go:175: Cleaning up "running-upgrade-565552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-565552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-565552: (1.269383912s)
--- PASS: TestRunningBinaryUpgrade (235.39s)
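The running-binary upgrade exercises in-place adoption: a released binary creates and keeps running the cluster, then the build under test restarts the same profile. A sketch of the sequence (both binary paths are the ones in this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println(bin, args, "failed:", err)
		os.Exit(1)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.650938030" // released v1.26.0 binary
	cur := "out/minikube-linux-amd64"        // build under test
	p := "running-upgrade-565552"
	run(old, "start", "-p", p, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=containerd")
	run(cur, "start", "-p", p, "--memory=2200", "--driver=kvm2", "--container-runtime=containerd")
	run(cur, "delete", "-p", p) // cleanup, as in the test
}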

TestKubernetesUpgrade (205.13s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m41.299633233s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-496643
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-496643: (2.244504653s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-496643 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-496643 status --format={{.Host}}: exit status 7 (72.073301ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m13.286772856s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-496643 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (218.019692ms)

-- stdout --
	* [kubernetes-upgrade-496643] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-496643
	    minikube start -p kubernetes-upgrade-496643 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4966432 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-496643 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-496643 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (26.782332057s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-496643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-496643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-496643: (1.140577459s)
--- PASS: TestKubernetesUpgrade (205.13s)
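The downgrade attempt above fails by design: minikube refuses to move an existing cluster to an older Kubernetes version. A toy sketch of the version gate (the minor-version parser here is illustrative, not minikube's real semver handling):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor version from "vMAJOR.MINOR.PATCH[-pre]".
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	current, requested := "v1.29.0-rc.2", "v1.16.0"
	if minorOf(requested) < minorOf(current) {
		fmt.Printf("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
	}
}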

TestStoppedBinaryUpgrade/Setup (3.73s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.73s)

TestPause/serial/Start (72.73s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-899108 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-899108 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m12.729624848s)
--- PASS: TestPause/serial/Start (72.73s)

TestStoppedBinaryUpgrade/Upgrade (242.23s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3879519673 start -p stopped-upgrade-920043 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0115 03:52:17.590397   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3879519673 start -p stopped-upgrade-920043 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m3.674953205s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3879519673 -p stopped-upgrade-920043 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3879519673 -p stopped-upgrade-920043 stop: (12.162177628s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-920043 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-920043 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m46.38904763s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (242.23s)

TestPause/serial/SecondStartNoReconfiguration (69.97s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-899108 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0115 03:52:34.543721   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-899108 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m9.949376989s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (69.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275948 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-275948 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (71.841682ms)

-- stdout --
	* [NoKubernetes-275948] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
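The exit-14 case above is pure flag validation: --kubernetes-version contradicts --no-kubernetes. A hypothetical sketch of the check (checkFlags is illustrative, not minikube's source):

package main

import (
	"errors"
	"fmt"
)

// checkFlags rejects the contradictory flag combination rejected above.
func checkFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := checkFlags(true, "1.20"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}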

TestNoKubernetes/serial/StartWithK8s (52.04s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275948 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275948 --driver=kvm2  --container-runtime=containerd: (51.759424861s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-275948 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (52.04s)

TestNetworkPlugins/group/false (3.86s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-754887 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-754887 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (139.560712ms)

-- stdout --
	* [false-754887] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0115 03:53:26.919066   47740 out.go:296] Setting OutFile to fd 1 ...
	I0115 03:53:26.919292   47740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:53:26.919306   47740 out.go:309] Setting ErrFile to fd 2...
	I0115 03:53:26.919314   47740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 03:53:26.919614   47740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17909-7685/.minikube/bin
	I0115 03:53:26.920399   47740 out.go:303] Setting JSON to false
	I0115 03:53:26.921645   47740 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5752,"bootTime":1705285055,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 03:53:26.921727   47740 start.go:138] virtualization: kvm guest
	I0115 03:53:26.924481   47740 out.go:177] * [false-754887] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 03:53:26.926630   47740 out.go:177]   - MINIKUBE_LOCATION=17909
	I0115 03:53:26.926669   47740 notify.go:220] Checking for updates...
	I0115 03:53:26.928439   47740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 03:53:26.929815   47740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17909-7685/kubeconfig
	I0115 03:53:26.931266   47740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17909-7685/.minikube
	I0115 03:53:26.932628   47740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 03:53:26.934396   47740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 03:53:26.936793   47740 config.go:182] Loaded profile config "NoKubernetes-275948": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:53:26.936921   47740 config.go:182] Loaded profile config "pause-899108": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 03:53:26.936996   47740 config.go:182] Loaded profile config "stopped-upgrade-920043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0115 03:53:26.937076   47740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 03:53:26.983898   47740 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 03:53:26.985380   47740 start.go:296] selected driver: kvm2
	I0115 03:53:26.985392   47740 start.go:900] validating driver "kvm2" against <nil>
	I0115 03:53:26.985402   47740 start.go:911] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 03:53:26.987759   47740 out.go:177] 
	W0115 03:53:26.989180   47740 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0115 03:53:26.990506   47740 out.go:177] 

** /stderr **
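The exit-14 above comes from a runtime/CNI compatibility check: containerd cannot run without a CNI, so --cni=false is rejected before any VM is created. A hypothetical sketch of that gate (validateCNI is illustrative, not minikube's source):

package main

import "fmt"

// validateCNI mirrors the MK_USAGE refusal above: only the docker runtime
// can run without a CNI in this sketch's simplified model.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}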
net_test.go:88: 
----------------------- debugLogs start: false-754887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-754887

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-754887" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 15 Jan 2024 03:52:17 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: cluster_info
server: https://192.168.39.186:8443
name: pause-899108
contexts:
- context:
cluster: pause-899108
extensions:
- extension:
last-update: Mon, 15 Jan 2024 03:52:17 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: context_info
namespace: default
user: pause-899108
name: pause-899108
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-899108
user:
client-certificate: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/pause-899108/client.crt
client-key: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/pause-899108/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-754887

>>> host: docker daemon status / docker daemon config / /etc/docker/daemon.json / docker system info /
    cri-docker daemon status / cri-docker daemon config / /etc/systemd/system/cri-docker.service.d/10-cni.conf /
    /usr/lib/systemd/system/cri-docker.service / cri-dockerd version / containerd daemon status /
    containerd daemon config / /lib/systemd/system/containerd.service / /etc/containerd/config.toml /
    containerd config dump / crio daemon status / crio daemon config / /etc/crio / crio config
    (every probe above returned the same two lines):
* Profile "false-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754887"

----------------------- debugLogs end: false-754887 [took: 3.542860493s] --------------------------------
helpers_test.go:175: Cleaning up "false-754887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-754887
--- PASS: TestNetworkPlugins/group/false (3.86s)
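Note: the string of "Profile not found" responses in the debugLogs above is expected here; the false-754887 profile apparently never produced a running host (the "false" CNI case passes in under four seconds, long before a VM would exist), so every host-level probe had nothing to query. A quick manual sanity check, assuming jq is on the PATH and the usual {"valid": [...], "invalid": [...]} shape of the JSON output:

	# Hypothetical manual check: list the profiles minikube actually knows about
	out/minikube-linux-amd64 profile list --output json | jq '.valid[].Name'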

TestPause/serial/Pause (0.69s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-899108 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-899108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-899108 --output=json --layout=cluster: exit status 2 (285.67572ms)
-- stdout --
	{"Name":"pause-899108","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-899108","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
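The status JSON above encodes component state as HTTP-style codes: 418 for Paused, 200 for OK, 405 for Stopped. The exit status 2 is deliberate; minikube status signals non-running cluster states through its exit code, so the test treats the non-zero exit as expected. A sketch for pulling out just the cluster and per-component states, assuming jq is available on the host:

	out/minikube-linux-amd64 status -p pause-899108 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, components: .Nodes[0].Components}'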

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-899108 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-899108 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (0.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-899108 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.81s)

TestPause/serial/VerifyDeletedResources (1.77s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.769650901s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.77s)

TestNoKubernetes/serial/StartWithStopK8s (76.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275948 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275948 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m15.256959439s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-275948 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-275948 status -o json: exit status 2 (239.843448ms)
-- stdout --
	{"Name":"NoKubernetes-275948","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-275948
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-275948: (1.003370071s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (76.50s)

TestNoKubernetes/serial/Start (31.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275948 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275948 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.977319094s)
--- PASS: TestNoKubernetes/serial/Start (31.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-920043
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-920043: (1.156213879s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-275948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-275948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.022913ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
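The exit status 3 from ssh is the interesting part here: systemctl is-active --quiet exits 0 only when the unit is active, and 3 conventionally means inactive, which is exactly what a --no-kubernetes profile should report for kubelet. A minimal manual re-check, reusing the profile name and command from the log:

	out/minikube-linux-amd64 ssh -p NoKubernetes-275948 \
	  "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet is not active (exit $?)"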

TestNoKubernetes/serial/ProfileList (1.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-275948
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-275948: (1.34524701s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (68.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275948 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275948 --driver=kvm2  --container-runtime=containerd: (1m8.094462435s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-275948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-275948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.437334ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestNetworkPlugins/group/auto/Start (101.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m41.804445395s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.80s)

TestNetworkPlugins/group/kindnet/Start (83.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m23.769361175s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.77s)

TestNetworkPlugins/group/calico/Start (133.98s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m13.982626809s)
--- PASS: TestNetworkPlugins/group/calico/Start (133.98s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lnl62" [1d26ca68-d4e9-4914-9e9f-8c3a157b97c0] Running
E0115 03:59:22.059279   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007191933s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
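The harness polls until a pod matching app=kindnet reports Running. An equivalent one-liner with kubectl wait (a sketch; the selector, namespace, and context are taken from the log above):

	kubectl --context kindnet-754887 -n kube-system wait pod \
	  -l app=kindnet --for=condition=Ready --timeout=10m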

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n62ft" [5b08c4d9-1f79-4203-a0d5-eea4e48881fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n62ft" [5b08c4d9-1f79-4203-a0d5-eea4e48881fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005835557s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wf6tl" [83657c80-3667-4b12-a770-b483342784f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wf6tl" [83657c80-3667-4b12-a770-b483342784f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004853608s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.34s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
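Localhost and HairPin probe two different loopback paths: the first nc dials localhost:8080 to confirm the pod can reach its own container port, while the second dials the pod's own Service name (netcat), which only succeeds when hairpin NAT lets a pod reach itself through its Service VIP. Roughly, the hairpin leg amounts to this sketch:

	# Dial our own Service from inside the pod; -w 5 bounds the wait, -z only scans.
	kubectl --context auto-754887 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin OK"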

TestNetworkPlugins/group/custom-flannel/Start (89.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m29.439397568s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.44s)

TestNetworkPlugins/group/enable-default-cni/Start (101.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0115 03:59:53.534874   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m41.328812527s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.33s)

TestNetworkPlugins/group/flannel/Start (134.74s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m14.740009289s)
--- PASS: TestNetworkPlugins/group/flannel/Start (134.74s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vvdfj" [d4daadd2-a5fb-44b4-8a60-5f3a5c8fa2e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007338262s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-95k4h" [826b1fa7-ff1e-44e8-9e83-663f95ed6b6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-95k4h" [826b1fa7-ff1e-44e8-9e83-663f95ed6b6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00432527s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (101.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0115 04:01:16.581876   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-754887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m41.961055063s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.96s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lwwbp" [cbe5052e-f148-4400-a589-62c7e1ffe7bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lwwbp" [cbe5052e-f148-4400-a589-62c7e1ffe7bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004086488s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.50s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bnmzl" [72f39944-fd15-48cf-8a94-97a603acc81d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bnmzl" [72f39944-fd15-48cf-8a94-97a603acc81d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005373963s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (138.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-416977 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-416977 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m18.83859829s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (138.84s)

TestStartStop/group/no-preload/serial/FirstStart (211.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-891153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-891153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (3m31.994236332s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (211.99s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gqq8j" [40294fd2-6fa9-4a5a-bc44-fbe9383d0450] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005079643s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l2cf9" [d45c4cf4-2734-49b1-99f2-23aff2e538f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l2cf9" [d45c4cf4-2734-49b1-99f2-23aff2e538f2] Running
E0115 04:02:25.106480   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005656677s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)
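The DNS probe resolves the shorthand kubernetes.default through the pod's resolv.conf search domains; the fully qualified name should resolve identically and makes a handy manual cross-check (sketch, using the same context and deployment as the log):

	kubectl --context flannel-754887 exec deployment/netcat -- \
	  nslookup kubernetes.default.svc.cluster.local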

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-754887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-754887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bfqk9" [8c56bcc0-5395-4624-ae13-c4e092da9a4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bfqk9" [8c56bcc0-5395-4624-ae13-c4e092da9a4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004506135s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.39s)

TestStartStop/group/embed-certs/serial/FirstStart (114.6s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-214175 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-214175 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m54.599206991s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (114.60s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-754887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-754887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0115 04:11:21.768660   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-879781 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-879781 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m16.58727763s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.59s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-416977 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba225f0a-bd52-4b45-8a04-3868f61d97ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ba225f0a-bd52-4b45-8a04-3868f61d97ce] Running
E0115 04:04:17.481660   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
	(same cert_rotation error repeated 8 more times with increasing backoff through 04:04:18)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005050696s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-416977 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-416977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0115 04:04:20.040682   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-416977 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)
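The --images/--registries flags redirect metrics-server to a fake registry so the deployment can exist without ever pulling. One way to confirm the override landed is to read the image straight off the deployment spec (jsonpath sketch; the exact composed image string depends on how minikube joins registry and image):

	kubectl --context old-k8s-version-416977 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'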

TestStartStop/group/old-k8s-version/serial/Stop (92.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-416977 --alsologtostderr -v=3
E0115 04:04:22.059274   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 04:04:22.601462   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
E0115 04:04:22.741187   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
	(same cert_rotation error repeated with increasing backoff for auto-754887 and kindnet-754887 through 04:04:27)
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-416977 --alsologtostderr -v=3: (1m32.414816758s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.41s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-879781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d771c410-6b2d-49d0-81b7-57b7f8a6231c] Pending
helpers_test.go:344: "busybox" [d771c410-6b2d-49d0-81b7-57b7f8a6231c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0115 04:04:32.982592   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d771c410-6b2d-49d0-81b7-57b7f8a6231c] Running
E0115 04:04:37.962088   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.007135661s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-879781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-879781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-879781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.157201696s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-879781 describe deploy/metrics-server -n kube-system
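EnableAddonWhileActive exercises minikube's per-addon image and registry overrides; the same invocation in isolation (fake.domain presumably keeps the pull away from any live registry):

	# enable metrics-server with both its image and registry overridden
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-879781 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	# confirm the deployment spec picked up the overridden image
	kubectl --context default-k8s-diff-port-879781 describe deploy/metrics-server -n kube-system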
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-214175 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7ff840a-b8ae-441d-bea4-e7f01e115a66] Pending
helpers_test.go:344: "busybox" [d7ff840a-b8ae-441d-bea4-e7f01e115a66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0115 04:04:43.223336   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d7ff840a-b8ae-441d-bea4-e7f01e115a66] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004458505s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-214175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-879781 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-879781 --alsologtostderr -v=3: (1m32.054823576s)
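Stop halts the VM without deleting the profile; afterwards, status reports the host as Stopped with a non-zero exit code rather than failing outright. A sketch of the stop-and-verify pair:

	out/minikube-linux-amd64 stop -p default-k8s-diff-port-879781 --alsologtostderr -v=3
	# a halted host prints "Stopped" and exits with status 7, the code
	# the EnableAddonAfterStop checks further down treat as "may be ok"
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879781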
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.05s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-214175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-214175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044988352s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-214175 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (92.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-214175 --alsologtostderr -v=3
E0115 04:04:53.533887   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 04:04:58.442437   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
E0115 04:05:03.703871   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
E0115 04:05:21.486369   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:21.491618   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:21.501828   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:21.522058   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:21.562283   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:21.643242   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:21.803404   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:22.123977   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:22.764112   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:24.044994   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:26.605506   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:31.726648   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-214175 --alsologtostderr -v=3: (1m32.626595822s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.63s)

TestStartStop/group/no-preload/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-891153 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [de22ebf7-ac5e-476d-bdfb-1149b75a783e] Pending
helpers_test.go:344: "busybox" [de22ebf7-ac5e-476d-bdfb-1149b75a783e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0115 04:05:39.403470   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
helpers_test.go:344: "busybox" [de22ebf7-ac5e-476d-bdfb-1149b75a783e] Running
E0115 04:05:41.967603   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:05:44.664098   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004337807s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-891153 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-891153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-891153 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/no-preload/serial/Stop (92.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-891153 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-891153 --alsologtostderr -v=3: (1m32.066736147s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416977 -n old-k8s-version-416977
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416977 -n old-k8s-version-416977: exit status 7 (72.099723ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-416977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
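The `exit status 7 (may be ok)` line above is the test distinguishing a cleanly stopped host from a real failure before toggling an addon offline; the same probe as a shell sketch:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416977 -n old-k8s-version-416977
	echo "status exit code: $?"   # 7 plus "Stopped" means down-but-intact, not broken
	# addons can still be enabled while the cluster is stopped
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-416977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4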
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (107.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-416977 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0115 04:06:02.448163   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-416977 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m47.370768654s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416977 -n old-k8s-version-416977
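SecondStart restarts the profile that Stop halted: running start again with matching flags reuses the existing VM and cluster state instead of provisioning a new one. A trimmed sketch of the restart plus the health probe that follows:

	# restart the stopped profile (flags should match the first start)
	out/minikube-linux-amd64 start -p old-k8s-version-416977 --memory=2200 --driver=kvm2 \
	  --container-runtime=containerd --kubernetes-version=v1.16.0
	# Host should now report Running, with exit status 0
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416977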
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (107.66s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781: exit status 7 (81.042561ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-879781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (322.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-879781 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 04:06:21.768520   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:21.773810   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:21.784081   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:21.804392   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:21.844720   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:21.925096   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:22.085740   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:22.406377   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:23.046712   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:24.327759   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-879781 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m22.304508015s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781
E0115 04:11:34.849421   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (322.63s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214175 -n embed-certs-214175
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214175 -n embed-certs-214175: exit status 7 (96.603661ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-214175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (405.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-214175 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 04:06:26.888979   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:32.010042   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:34.849442   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:34.854701   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:34.865380   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:34.885506   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:34.925774   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:35.006061   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:35.166843   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:35.487686   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:36.128361   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:37.409392   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:39.969962   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:42.250329   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:06:43.408396   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:06:45.090710   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:06:55.331794   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:07:01.324168   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
E0115 04:07:02.731430   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:07:06.584745   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
E0115 04:07:10.040298   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.045576   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.055808   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.076133   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.116393   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.197284   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.357743   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:10.678309   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:11.318897   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:12.599216   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:15.160105   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:15.812296   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-214175 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (6m45.259770222s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214175 -n embed-certs-214175
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (405.62s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891153 -n no-preload-891153
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891153 -n no-preload-891153: exit status 7 (83.959471ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-891153 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (300.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-891153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 04:07:20.280270   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:30.520425   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:34.543070   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-891153 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m0.406826304s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891153 -n no-preload-891153
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.67s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xkljr" [2f26d990-2055-46df-a565-ab24827400eb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0115 04:07:42.614515   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:42.619798   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:42.630073   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:42.650486   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:42.691163   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:42.771740   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:42.932217   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xkljr" [2f26d990-2055-46df-a565-ab24827400eb] Running
E0115 04:07:43.253327   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:43.691802   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:07:43.894210   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:45.174549   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:07:47.735477   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005543446s
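UserAppExistsAfterStop asserts that the dashboard deployed before the stop came back after the restart. An equivalent manual probe, with kubectl wait standing in for the test's 9m poll:

	kubectl --context old-k8s-version-416977 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s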
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xkljr" [2f26d990-2055-46df-a565-ab24827400eb] Running
E0115 04:07:51.001515   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:07:52.856679   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004574877s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-416977 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-416977 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
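VerifyKubernetesImages dumps the images cached by the container runtime and flags anything outside the expected Kubernetes set (here kindnetd and the busybox test image). A sketch of the same inspection; jq and the repoTags field name are assumptions on top of what the log shows:

	out/minikube-linux-amd64 -p old-k8s-version-416977 image list --format=json | jq -r '.[].repoTags[]'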
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-416977 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416977 -n old-k8s-version-416977
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416977 -n old-k8s-version-416977: exit status 2 (254.642863ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416977 -n old-k8s-version-416977
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416977 -n old-k8s-version-416977: exit status 2 (274.348105ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-416977 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416977 -n old-k8s-version-416977
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416977 -n old-k8s-version-416977
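Pause freezes the control plane and kubelet without stopping the VM, and the test leans on the same non-zero-status convention as EnableAddonAfterStop, this time with exit status 2. The full cycle as a sketch:

	out/minikube-linux-amd64 pause -p old-k8s-version-416977
	# while paused: APIServer=Paused, Kubelet=Stopped, each with exit status 2
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416977
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416977
	out/minikube-linux-amd64 unpause -p old-k8s-version-416977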
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)

TestStartStop/group/newest-cni/serial/FirstStart (60.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-373329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 04:08:03.097508   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:08:05.329488   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:08:23.578417   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:08:31.962573   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:08:57.590578   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/functional-195136/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-373329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m0.080206292s)
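For reference, the newest-cni first start above, restated with the CNI-relevant flags annotated (same command, trimmed of logging flags):

	# --wait=apiserver,system_pods,default_sa: only wait for control-plane readiness,
	#   since with --network-plugin=cni and no CNI installed, app pods cannot schedule yet
	#   (hence the "cni mode requires additional setup" warnings below)
	# --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16: the pod CIDR a CNI would use
	out/minikube-linux-amd64 start -p newest-cni-373329 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2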
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-373329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-373329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.417516586s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/newest-cni/serial/Stop (92.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-373329 --alsologtostderr -v=3
E0115 04:09:04.539050   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
E0115 04:09:05.612208   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
E0115 04:09:10.677426   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:10.682684   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:10.692931   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:10.713188   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:10.753491   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:10.833915   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:10.994336   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:11.315172   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:11.955975   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:13.236837   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:15.797996   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:17.481977   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
E0115 04:09:18.692948   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/enable-default-cni-754887/client.crt: no such file or directory
E0115 04:09:20.918600   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:22.059887   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/ingress-addon-legacy-385885/client.crt: no such file or directory
E0115 04:09:22.740522   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
E0115 04:09:31.158793   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:45.164354   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/kindnet-754887/client.crt: no such file or directory
E0115 04:09:50.425064   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/auto-754887/client.crt: no such file or directory
E0115 04:09:51.639851   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:09:53.533884   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/addons-974059/client.crt: no such file or directory
E0115 04:09:53.882778   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/flannel-754887/client.crt: no such file or directory
E0115 04:10:21.486539   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
E0115 04:10:26.459485   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-373329 --alsologtostderr -v=3: (1m32.295633396s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (92.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-373329 -n newest-cni-373329
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-373329 -n newest-cni-373329: exit status 7 (86.725378ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-373329 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (36.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-373329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 04:10:32.600812   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
E0115 04:10:49.170369   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/calico-754887/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-373329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (35.84411461s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-373329 -n newest-cni-373329
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-373329 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-373329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-373329 -n newest-cni-373329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-373329 -n newest-cni-373329: exit status 2 (287.116866ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-373329 -n newest-cni-373329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-373329 -n newest-cni-373329: exit status 2 (277.321342ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-373329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-373329 -n newest-cni-373329
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-373329 -n newest-cni-373329
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7c25k" [934d2bcf-675d-46de-a4e4-bfea2282624a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7c25k" [934d2bcf-675d-46de-a4e4-bfea2282624a] Running
E0115 04:11:49.453411   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/custom-flannel-754887/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005752386s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7c25k" [934d2bcf-675d-46de-a4e4-bfea2282624a] Running
E0115 04:11:54.521461   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/old-k8s-version-416977/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004796739s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-879781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-879781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-879781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781: exit status 2 (249.236488ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781: exit status 2 (260.240738ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-879781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-879781 -n default-k8s-diff-port-879781
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)
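The Pause subtest drives a pause/unpause round-trip: while paused, status reports the API server as Paused and the kubelet as Stopped, and status itself exits 2 (which the test tolerates, per "may be ok" above). A hedged sketch of the same flow, assuming a plain minikube binary in place of the CI's out/minikube-linux-amd64:

	minikube pause -p default-k8s-diff-port-879781
	minikube status --format={{.APIServer}} -p default-k8s-diff-port-879781 || true  # expect: Paused
	minikube status --format={{.Kubelet}} -p default-k8s-diff-port-879781 || true    # expect: Stopped
	minikube unpause -p default-k8s-diff-port-879781
	minikube status --format={{.APIServer}} -p default-k8s-diff-port-879781          # expect: Running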

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jl68k" [3798bdb8-7a4b-488d-8661-9b5ea3e859e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004720028s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jl68k" [3798bdb8-7a4b-488d-8661-9b5ea3e859e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004909735s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-891153 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-891153 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.56s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-891153 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891153 -n no-preload-891153
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891153 -n no-preload-891153: exit status 2 (241.992297ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-891153 -n no-preload-891153
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-891153 -n no-preload-891153: exit status 2 (243.263778ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-891153 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891153 -n no-preload-891153
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-891153 -n no-preload-891153
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.56s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4wzsq" [e8847890-7d67-4549-9a73-11b8584a7972] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0115 04:13:10.299716   14954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/bridge-754887/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4wzsq" [e8847890-7d67-4549-9a73-11b8584a7972] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004461598s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4wzsq" [e8847890-7d67-4549-9a73-11b8584a7972] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004256948s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-214175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-214175 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-214175 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214175 -n embed-certs-214175
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214175 -n embed-certs-214175: exit status 2 (235.329076ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-214175 -n embed-certs-214175
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-214175 -n embed-certs-214175: exit status 2 (240.657442ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-214175 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214175 -n embed-certs-214175
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-214175 -n embed-certs-214175
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.49s)
Test skip (39/337)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
217 TestKicCustomNetwork 0
218 TestKicExistingNetwork 0
219 TestKicCustomSubnet 0
220 TestKicStaticIP 0
252 TestChangeNoneUser 0
255 TestScheduledStopWindows 0
257 TestSkaffold 0
259 TestInsufficientStorage 0
263 TestMissingContainerUpgrade 0
274 TestNetworkPlugins/group/kubenet 3.47
282 TestNetworkPlugins/group/cilium 5.35
295 TestStartStop/group/disable-driver-mounts 0.17

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
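The cached-images subtests skip whenever a preloaded image tarball is already present, since the images then never enter the cache individually. A hedged check of the preload cache (the path assumes minikube's default cache layout):

	ls ~/.minikube/cache/preloaded-tarball/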

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
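All eight TunnelCmd subtests skip for the same reason: minikube tunnel must modify host routes, and the CI user cannot run route without a password. A hedged sketch of running the tunnel interactively, where <profile> is a placeholder:

	# Requires sudo (or passwordless route/ip privileges) on Linux.
	minikube tunnel -p <profile> &
	kubectl get svc -w  # LoadBalancer services gain an external IP while the tunnel runs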

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-754887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-754887
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-754887
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/hosts:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/resolv.conf:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-754887
>>> host: crictl pods:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: crictl containers:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> k8s: describe netcat deployment:
error: context "kubenet-754887" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-754887" does not exist
>>> k8s: netcat logs:
error: context "kubenet-754887" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-754887" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-754887" does not exist
>>> k8s: coredns logs:
error: context "kubenet-754887" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-754887" does not exist
>>> k8s: api server logs:
error: context "kubenet-754887" does not exist
>>> host: /etc/cni:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: ip a s:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: ip r s:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: iptables-save:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: iptables table nat:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-754887" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-754887" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-754887" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: kubelet daemon config:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> k8s: kubelet logs:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 03:52:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.186:8443
  name: pause-899108
contexts:
- context:
    cluster: pause-899108
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 03:52:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-899108
  name: pause-899108
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-899108
  user:
    client-certificate: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/pause-899108/client.crt
    client-key: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/pause-899108/client.key
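The dump above contains only a leftover pause-899108 entry and an empty current-context, which is consistent with every kubectl call against kubenet-754887 failing with "context was not found". A hedged way to verify that state by hand:

	kubectl config get-contexts
	kubectl config current-context  # errors while current-context is ""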
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-754887
>>> host: docker daemon status:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: docker daemon config:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: docker system info:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: cri-docker daemon status:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: cri-docker daemon config:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: cri-dockerd version:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: containerd daemon status:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: containerd daemon config:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: containerd config dump:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: crio daemon status:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: crio daemon config:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: /etc/crio:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
>>> host: crio config:
* Profile "kubenet-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754887"
----------------------- debugLogs end: kubenet-754887 [took: 3.294011202s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-754887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-754887
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)
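The skip above reflects that the containerd runtime needs a real CNI, so kubenet is not exercised in this job. When starting a comparable cluster manually with this driver/runtime combination, a CNI is chosen at start time; a hedged sketch using minikube's documented flags:

	minikube start -p kubenet-754887 --driver=kvm2 \
	  --container-runtime=containerd --cni=bridge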

TestNetworkPlugins/group/cilium (5.35s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-754887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-754887
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-754887
>>> host: /etc/nsswitch.conf:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"
>>> host: /etc/hosts:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"
>>> host: /etc/resolv.conf:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-754887
>>> host: crictl pods:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"
>>> host: crictl containers:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"
>>> k8s: describe netcat deployment:
error: context "cilium-754887" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-754887" does not exist
>>> k8s: netcat logs:
error: context "cilium-754887" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-754887" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-754887" does not exist
>>> k8s: coredns logs:
error: context "cilium-754887" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-754887

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-754887

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-754887

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-754887

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-754887" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17909-7685/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 03:52:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.186:8443
  name: pause-899108
contexts:
- context:
    cluster: pause-899108
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 03:52:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-899108
  name: pause-899108
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-899108
  user:
    client-certificate: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/pause-899108/client.crt
    client-key: /home/jenkins/minikube-integration/17909-7685/.minikube/profiles/pause-899108/client.key
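Note: the kubeconfig above defines only the pause-899108 context and leaves current-context empty, so every kubectl and minikube lookup for cilium-754887 in this debug log fails by construction. As a quick sketch (standard kubectl subcommands; these invocations are illustrative and were not part of the captured run), the contexts a kubeconfig actually defines can be checked with:

	kubectl config get-contexts
	kubectl config current-context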

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-754887

>>> host: docker daemon status:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: docker daemon config:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: docker system info:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: cri-docker daemon status:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: cri-docker daemon config:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: cri-dockerd version:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: containerd daemon status:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: containerd daemon config:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: containerd config dump:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: crio daemon status:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: crio daemon config:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: /etc/crio:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

>>> host: crio config:
* Profile "cilium-754887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754887"

----------------------- debugLogs end: cilium-754887 [took: 5.201624458s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-754887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-754887
--- SKIP: TestNetworkPlugins/group/cilium (5.35s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-744534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-744534
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
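Note: this skip is driver-gated: the suite ran on the kvm2 driver, and the test body only applies to virtualbox. An illustrative invocation that would exercise the flag (both --driver and --disable-driver-mounts are real minikube start flags, but the command itself is assumed for illustration rather than taken from this run):

	minikube start -p disable-driver-mounts-744534 --driver=virtualbox --disable-driver-mounts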
