Test Report: KVM_Linux 19347

0e08cf035d2b49b1a7844497e1c3c2e2e59b4b36:2024-07-30:35562

Failed tests (1/349)

| Order | Failed test                            | Duration (s) |
|-------|----------------------------------------|--------------|
| 168   | TestMultiControlPlane/serial/DeployApp | 39.18        |
TestMultiControlPlane/serial/DeployApp (39.18s)
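For context before the raw log: ha_test.go:140 queries the pod IPs of the busybox deployment applied from ./testdata/ha/ha-pod-dns-test.yaml, and ha_test.go:149 fails because the query keeps returning four IPs where exactly three are expected. The stderr blocks further down suggest why: pod busybox-fc5497c4f-8ql68 ended in phase Failed and is later reported "not found", while the three surviving pods resolve DNS fine. A minimal Go sketch of that IP-count check follows (a hypothetical illustration only, not the actual minikube test code; countPodIPs is an invented helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countPodIPs reruns the jsonpath query from ha_test.go:140 against the
// given minikube profile and counts the space-separated pod IPs.
func countPodIPs(profile string) (int, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return 0, err
	}
	// Strip any surrounding quotes before splitting on whitespace, since the
	// log below shows the IP list wrapped in single quotes.
	ips := strings.Fields(strings.Trim(strings.TrimSpace(string(out)), "'"))
	return len(ips), nil
}

func main() {
	n, err := countPodIPs("ha-238496")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if n != 3 {
		// Mirrors the message at ha_test.go:149; the real test retries for a
		// while before treating the mismatch as fatal.
		fmt.Printf("expected 3 Pod IPs but got %d (may be temporary)\n", n)
	}
}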

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-238496 -- rollout status deployment/busybox: (5.929027416s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0729 23:19:24.228331   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.233585   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.243915   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.264232   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.304549   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.384877   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.545307   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:19:24.865594   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0729 23:19:25.505774   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0729 23:19:26.787108   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0729 23:19:29.347785   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0729 23:19:34.468040   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0729 23:19:44.708532   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-8ql68 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-8ql68 -- nslookup kubernetes.io: exit status 1 (104.452814ms)

** stderr ** 
	error: cannot exec into a container in a completed pod; current phase is Failed

** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-8ql68 could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-ftt4w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-scl6h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-8ql68 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-8ql68 -- nslookup kubernetes.default: exit status 1 (106.003564ms)

** stderr ** 
	Error from server (NotFound): pods "busybox-fc5497c4f-8ql68" not found

** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-8ql68 could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-ftt4w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-scl6h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-8ql68 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-8ql68 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (101.516243ms)

** stderr ** 
	Error from server (NotFound): pods "busybox-fc5497c4f-8ql68" not found

** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-8ql68 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-ftt4w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-scl6h -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-238496 -n ha-238496
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-238496 logs -n 25: (1.084438141s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| delete  | -p functional-652848                 | functional-652848 | jenkins | v1.33.1 | 29 Jul 24 23:15 UTC | 29 Jul 24 23:15 UTC |
	| start   | -p ha-238496 --wait=true             | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:15 UTC | 29 Jul 24 23:19 UTC |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=kvm2                        |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- apply -f             | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- rollout status       | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- get pods -o          | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC |                     |
	|         | busybox-fc5497c4f-8ql68 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-d42qb --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-ftt4w --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-scl6h --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC |                     |
	|         | busybox-fc5497c4f-8ql68 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-d42qb --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-ftt4w --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-scl6h --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC |                     |
	|         | busybox-fc5497c4f-8ql68 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-d42qb -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-ftt4w -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-238496 -- exec                 | ha-238496         | jenkins | v1.33.1 | 29 Jul 24 23:19 UTC | 29 Jul 24 23:19 UTC |
	|         | busybox-fc5497c4f-scl6h -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 23:15:16
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 23:15:16.010336   29396 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:15:16.010608   29396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:15:16.010618   29396 out.go:304] Setting ErrFile to fd 2...
	I0729 23:15:16.010622   29396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:15:16.010886   29396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:15:16.011469   29396 out.go:298] Setting JSON to false
	I0729 23:15:16.012305   29396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3412,"bootTime":1722291504,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 23:15:16.012359   29396 start.go:139] virtualization: kvm guest
	I0729 23:15:16.014349   29396 out.go:177] * [ha-238496] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 23:15:16.015655   29396 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 23:15:16.015702   29396 notify.go:220] Checking for updates...
	I0729 23:15:16.018055   29396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 23:15:16.019280   29396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:15:16.020588   29396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:15:16.021752   29396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 23:15:16.022911   29396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 23:15:16.024125   29396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 23:15:16.058288   29396 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 23:15:16.059458   29396 start.go:297] selected driver: kvm2
	I0729 23:15:16.059470   29396 start.go:901] validating driver "kvm2" against <nil>
	I0729 23:15:16.059483   29396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 23:15:16.060204   29396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:15:16.060324   29396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19347-12221/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 23:15:16.075224   29396 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 23:15:16.075274   29396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 23:15:16.075538   29396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 23:15:16.075566   29396 cni.go:84] Creating CNI manager for ""
	I0729 23:15:16.075575   29396 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 23:15:16.075587   29396 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 23:15:16.075665   29396 start.go:340] cluster config:
	{Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:15:16.075824   29396 iso.go:125] acquiring lock: {Name:mke1b110143262a7fb7eb5e1cbaa1784fa37fd0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:15:16.078247   29396 out.go:177] * Starting "ha-238496" primary control-plane node in "ha-238496" cluster
	I0729 23:15:16.079320   29396 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 23:15:16.079359   29396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 23:15:16.079369   29396 cache.go:56] Caching tarball of preloaded images
	I0729 23:15:16.079460   29396 preload.go:172] Found /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 23:15:16.079474   29396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 23:15:16.079750   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:15:16.079767   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json: {Name:mk83765509bbe48dfceafa2fa0be21d32b315310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:16.079890   29396 start.go:360] acquireMachinesLock for ha-238496: {Name:mk79fbc287386032c39e512567e9786663e657a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 23:15:16.079917   29396 start.go:364] duration metric: took 14.94µs to acquireMachinesLock for "ha-238496"
	I0729 23:15:16.079932   29396 start.go:93] Provisioning new machine with config: &{Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:15:16.079984   29396 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 23:15:16.081578   29396 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 23:15:16.081687   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:15:16.081718   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:15:16.095932   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0729 23:15:16.096323   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:15:16.096896   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:15:16.096923   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:15:16.097216   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:15:16.097413   29396 main.go:141] libmachine: (ha-238496) Calling .GetMachineName
	I0729 23:15:16.097547   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:16.097777   29396 start.go:159] libmachine.API.Create for "ha-238496" (driver="kvm2")
	I0729 23:15:16.097801   29396 client.go:168] LocalClient.Create starting
	I0729 23:15:16.097825   29396 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem
	I0729 23:15:16.097853   29396 main.go:141] libmachine: Decoding PEM data...
	I0729 23:15:16.097873   29396 main.go:141] libmachine: Parsing certificate...
	I0729 23:15:16.097926   29396 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem
	I0729 23:15:16.097944   29396 main.go:141] libmachine: Decoding PEM data...
	I0729 23:15:16.097952   29396 main.go:141] libmachine: Parsing certificate...
	I0729 23:15:16.097971   29396 main.go:141] libmachine: Running pre-create checks...
	I0729 23:15:16.097981   29396 main.go:141] libmachine: (ha-238496) Calling .PreCreateCheck
	I0729 23:15:16.098305   29396 main.go:141] libmachine: (ha-238496) Calling .GetConfigRaw
	I0729 23:15:16.098614   29396 main.go:141] libmachine: Creating machine...
	I0729 23:15:16.098638   29396 main.go:141] libmachine: (ha-238496) Calling .Create
	I0729 23:15:16.098767   29396 main.go:141] libmachine: (ha-238496) Creating KVM machine...
	I0729 23:15:16.099974   29396 main.go:141] libmachine: (ha-238496) DBG | found existing default KVM network
	I0729 23:15:16.100617   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:16.100493   29419 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0729 23:15:16.100634   29396 main.go:141] libmachine: (ha-238496) DBG | created network xml: 
	I0729 23:15:16.100647   29396 main.go:141] libmachine: (ha-238496) DBG | <network>
	I0729 23:15:16.100656   29396 main.go:141] libmachine: (ha-238496) DBG |   <name>mk-ha-238496</name>
	I0729 23:15:16.100664   29396 main.go:141] libmachine: (ha-238496) DBG |   <dns enable='no'/>
	I0729 23:15:16.100674   29396 main.go:141] libmachine: (ha-238496) DBG |   
	I0729 23:15:16.100689   29396 main.go:141] libmachine: (ha-238496) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 23:15:16.100697   29396 main.go:141] libmachine: (ha-238496) DBG |     <dhcp>
	I0729 23:15:16.100704   29396 main.go:141] libmachine: (ha-238496) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 23:15:16.100716   29396 main.go:141] libmachine: (ha-238496) DBG |     </dhcp>
	I0729 23:15:16.100724   29396 main.go:141] libmachine: (ha-238496) DBG |   </ip>
	I0729 23:15:16.100732   29396 main.go:141] libmachine: (ha-238496) DBG |   
	I0729 23:15:16.100740   29396 main.go:141] libmachine: (ha-238496) DBG | </network>
	I0729 23:15:16.100750   29396 main.go:141] libmachine: (ha-238496) DBG | 
	I0729 23:15:16.105671   29396 main.go:141] libmachine: (ha-238496) DBG | trying to create private KVM network mk-ha-238496 192.168.39.0/24...
	I0729 23:15:16.170490   29396 main.go:141] libmachine: (ha-238496) Setting up store path in /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496 ...
	I0729 23:15:16.170526   29396 main.go:141] libmachine: (ha-238496) Building disk image from file:///home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 23:15:16.170537   29396 main.go:141] libmachine: (ha-238496) DBG | private KVM network mk-ha-238496 192.168.39.0/24 created
	I0729 23:15:16.170555   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:16.170413   29419 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:15:16.170571   29396 main.go:141] libmachine: (ha-238496) Downloading /home/jenkins/minikube-integration/19347-12221/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 23:15:16.419701   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:16.419556   29419 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa...
	I0729 23:15:16.565114   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:16.564962   29419 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/ha-238496.rawdisk...
	I0729 23:15:16.565144   29396 main.go:141] libmachine: (ha-238496) DBG | Writing magic tar header
	I0729 23:15:16.565157   29396 main.go:141] libmachine: (ha-238496) DBG | Writing SSH key tar header
	I0729 23:15:16.565167   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:16.565099   29419 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496 ...
	I0729 23:15:16.565212   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496
	I0729 23:15:16.565252   29396 main.go:141] libmachine: (ha-238496) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496 (perms=drwx------)
	I0729 23:15:16.565263   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube/machines
	I0729 23:15:16.565281   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:15:16.565293   29396 main.go:141] libmachine: (ha-238496) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube/machines (perms=drwxr-xr-x)
	I0729 23:15:16.565305   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221
	I0729 23:15:16.565313   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 23:15:16.565319   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home/jenkins
	I0729 23:15:16.565329   29396 main.go:141] libmachine: (ha-238496) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube (perms=drwxr-xr-x)
	I0729 23:15:16.565334   29396 main.go:141] libmachine: (ha-238496) DBG | Checking permissions on dir: /home
	I0729 23:15:16.565341   29396 main.go:141] libmachine: (ha-238496) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221 (perms=drwxrwxr-x)
	I0729 23:15:16.565352   29396 main.go:141] libmachine: (ha-238496) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 23:15:16.565364   29396 main.go:141] libmachine: (ha-238496) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 23:15:16.565377   29396 main.go:141] libmachine: (ha-238496) Creating domain...
	I0729 23:15:16.565390   29396 main.go:141] libmachine: (ha-238496) DBG | Skipping /home - not owner
	I0729 23:15:16.566416   29396 main.go:141] libmachine: (ha-238496) define libvirt domain using xml: 
	I0729 23:15:16.566442   29396 main.go:141] libmachine: (ha-238496) <domain type='kvm'>
	I0729 23:15:16.566450   29396 main.go:141] libmachine: (ha-238496)   <name>ha-238496</name>
	I0729 23:15:16.566457   29396 main.go:141] libmachine: (ha-238496)   <memory unit='MiB'>2200</memory>
	I0729 23:15:16.566462   29396 main.go:141] libmachine: (ha-238496)   <vcpu>2</vcpu>
	I0729 23:15:16.566467   29396 main.go:141] libmachine: (ha-238496)   <features>
	I0729 23:15:16.566475   29396 main.go:141] libmachine: (ha-238496)     <acpi/>
	I0729 23:15:16.566485   29396 main.go:141] libmachine: (ha-238496)     <apic/>
	I0729 23:15:16.566491   29396 main.go:141] libmachine: (ha-238496)     <pae/>
	I0729 23:15:16.566507   29396 main.go:141] libmachine: (ha-238496)     
	I0729 23:15:16.566515   29396 main.go:141] libmachine: (ha-238496)   </features>
	I0729 23:15:16.566520   29396 main.go:141] libmachine: (ha-238496)   <cpu mode='host-passthrough'>
	I0729 23:15:16.566525   29396 main.go:141] libmachine: (ha-238496)   
	I0729 23:15:16.566530   29396 main.go:141] libmachine: (ha-238496)   </cpu>
	I0729 23:15:16.566536   29396 main.go:141] libmachine: (ha-238496)   <os>
	I0729 23:15:16.566541   29396 main.go:141] libmachine: (ha-238496)     <type>hvm</type>
	I0729 23:15:16.566548   29396 main.go:141] libmachine: (ha-238496)     <boot dev='cdrom'/>
	I0729 23:15:16.566553   29396 main.go:141] libmachine: (ha-238496)     <boot dev='hd'/>
	I0729 23:15:16.566561   29396 main.go:141] libmachine: (ha-238496)     <bootmenu enable='no'/>
	I0729 23:15:16.566565   29396 main.go:141] libmachine: (ha-238496)   </os>
	I0729 23:15:16.566572   29396 main.go:141] libmachine: (ha-238496)   <devices>
	I0729 23:15:16.566577   29396 main.go:141] libmachine: (ha-238496)     <disk type='file' device='cdrom'>
	I0729 23:15:16.566612   29396 main.go:141] libmachine: (ha-238496)       <source file='/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/boot2docker.iso'/>
	I0729 23:15:16.566636   29396 main.go:141] libmachine: (ha-238496)       <target dev='hdc' bus='scsi'/>
	I0729 23:15:16.566649   29396 main.go:141] libmachine: (ha-238496)       <readonly/>
	I0729 23:15:16.566665   29396 main.go:141] libmachine: (ha-238496)     </disk>
	I0729 23:15:16.566678   29396 main.go:141] libmachine: (ha-238496)     <disk type='file' device='disk'>
	I0729 23:15:16.566708   29396 main.go:141] libmachine: (ha-238496)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 23:15:16.566730   29396 main.go:141] libmachine: (ha-238496)       <source file='/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/ha-238496.rawdisk'/>
	I0729 23:15:16.566745   29396 main.go:141] libmachine: (ha-238496)       <target dev='hda' bus='virtio'/>
	I0729 23:15:16.566756   29396 main.go:141] libmachine: (ha-238496)     </disk>
	I0729 23:15:16.566767   29396 main.go:141] libmachine: (ha-238496)     <interface type='network'>
	I0729 23:15:16.566780   29396 main.go:141] libmachine: (ha-238496)       <source network='mk-ha-238496'/>
	I0729 23:15:16.566791   29396 main.go:141] libmachine: (ha-238496)       <model type='virtio'/>
	I0729 23:15:16.566803   29396 main.go:141] libmachine: (ha-238496)     </interface>
	I0729 23:15:16.566814   29396 main.go:141] libmachine: (ha-238496)     <interface type='network'>
	I0729 23:15:16.566835   29396 main.go:141] libmachine: (ha-238496)       <source network='default'/>
	I0729 23:15:16.566852   29396 main.go:141] libmachine: (ha-238496)       <model type='virtio'/>
	I0729 23:15:16.566865   29396 main.go:141] libmachine: (ha-238496)     </interface>
	I0729 23:15:16.566879   29396 main.go:141] libmachine: (ha-238496)     <serial type='pty'>
	I0729 23:15:16.566885   29396 main.go:141] libmachine: (ha-238496)       <target port='0'/>
	I0729 23:15:16.566890   29396 main.go:141] libmachine: (ha-238496)     </serial>
	I0729 23:15:16.566896   29396 main.go:141] libmachine: (ha-238496)     <console type='pty'>
	I0729 23:15:16.566910   29396 main.go:141] libmachine: (ha-238496)       <target type='serial' port='0'/>
	I0729 23:15:16.566919   29396 main.go:141] libmachine: (ha-238496)     </console>
	I0729 23:15:16.566923   29396 main.go:141] libmachine: (ha-238496)     <rng model='virtio'>
	I0729 23:15:16.566932   29396 main.go:141] libmachine: (ha-238496)       <backend model='random'>/dev/random</backend>
	I0729 23:15:16.566936   29396 main.go:141] libmachine: (ha-238496)     </rng>
	I0729 23:15:16.566941   29396 main.go:141] libmachine: (ha-238496)     
	I0729 23:15:16.566947   29396 main.go:141] libmachine: (ha-238496)     
	I0729 23:15:16.566973   29396 main.go:141] libmachine: (ha-238496)   </devices>
	I0729 23:15:16.566988   29396 main.go:141] libmachine: (ha-238496) </domain>
	I0729 23:15:16.566999   29396 main.go:141] libmachine: (ha-238496) 
	I0729 23:15:16.571593   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:12:68:36 in network default
	I0729 23:15:16.572154   29396 main.go:141] libmachine: (ha-238496) Ensuring networks are active...
	I0729 23:15:16.572168   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:16.572900   29396 main.go:141] libmachine: (ha-238496) Ensuring network default is active
	I0729 23:15:16.573143   29396 main.go:141] libmachine: (ha-238496) Ensuring network mk-ha-238496 is active
	I0729 23:15:16.573572   29396 main.go:141] libmachine: (ha-238496) Getting domain xml...
	I0729 23:15:16.574187   29396 main.go:141] libmachine: (ha-238496) Creating domain...
	I0729 23:15:17.757845   29396 main.go:141] libmachine: (ha-238496) Waiting to get IP...
	I0729 23:15:17.759681   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:17.760105   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:17.760131   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:17.760076   29419 retry.go:31] will retry after 209.245228ms: waiting for machine to come up
	I0729 23:15:17.970437   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:17.970829   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:17.970853   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:17.970813   29419 retry.go:31] will retry after 283.092243ms: waiting for machine to come up
	I0729 23:15:18.255348   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:18.255731   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:18.255755   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:18.255694   29419 retry.go:31] will retry after 359.08307ms: waiting for machine to come up
	I0729 23:15:18.616174   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:18.616649   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:18.616677   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:18.616605   29419 retry.go:31] will retry after 467.932022ms: waiting for machine to come up
	I0729 23:15:19.086305   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:19.086757   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:19.086782   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:19.086720   29419 retry.go:31] will retry after 530.040761ms: waiting for machine to come up
	I0729 23:15:19.618323   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:19.618874   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:19.618898   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:19.618841   29419 retry.go:31] will retry after 750.123731ms: waiting for machine to come up
	I0729 23:15:20.370740   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:20.371168   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:20.371208   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:20.371132   29419 retry.go:31] will retry after 910.01431ms: waiting for machine to come up
	I0729 23:15:21.282557   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:21.283069   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:21.283093   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:21.283007   29419 retry.go:31] will retry after 1.475852847s: waiting for machine to come up
	I0729 23:15:22.760548   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:22.760976   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:22.761002   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:22.760929   29419 retry.go:31] will retry after 1.358011717s: waiting for machine to come up
	I0729 23:15:24.120772   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:24.121105   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:24.121126   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:24.121063   29419 retry.go:31] will retry after 2.051676006s: waiting for machine to come up
	I0729 23:15:26.174118   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:26.174517   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:26.174544   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:26.174467   29419 retry.go:31] will retry after 1.794194493s: waiting for machine to come up
	I0729 23:15:27.971315   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:27.971709   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:27.971737   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:27.971664   29419 retry.go:31] will retry after 3.105101795s: waiting for machine to come up
	I0729 23:15:31.080782   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:31.081278   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:31.081309   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:31.081209   29419 retry.go:31] will retry after 2.85435641s: waiting for machine to come up
	I0729 23:15:33.936818   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:33.937191   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find current IP address of domain ha-238496 in network mk-ha-238496
	I0729 23:15:33.937214   29396 main.go:141] libmachine: (ha-238496) DBG | I0729 23:15:33.937147   29419 retry.go:31] will retry after 5.319541558s: waiting for machine to come up
	I0729 23:15:39.260319   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.260809   29396 main.go:141] libmachine: (ha-238496) Found IP for machine: 192.168.39.113
	I0729 23:15:39.260831   29396 main.go:141] libmachine: (ha-238496) Reserving static IP address...
	I0729 23:15:39.260860   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has current primary IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.261182   29396 main.go:141] libmachine: (ha-238496) DBG | unable to find host DHCP lease matching {name: "ha-238496", mac: "52:54:00:4c:48:55", ip: "192.168.39.113"} in network mk-ha-238496
	I0729 23:15:39.335455   29396 main.go:141] libmachine: (ha-238496) DBG | Getting to WaitForSSH function...
	I0729 23:15:39.335480   29396 main.go:141] libmachine: (ha-238496) Reserved static IP address: 192.168.39.113
	I0729 23:15:39.335503   29396 main.go:141] libmachine: (ha-238496) Waiting for SSH to be available...
	I0729 23:15:39.337991   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.338365   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.338394   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.338622   29396 main.go:141] libmachine: (ha-238496) DBG | Using SSH client type: external
	I0729 23:15:39.338667   29396 main.go:141] libmachine: (ha-238496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa (-rw-------)
	I0729 23:15:39.338712   29396 main.go:141] libmachine: (ha-238496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 23:15:39.338731   29396 main.go:141] libmachine: (ha-238496) DBG | About to run SSH command:
	I0729 23:15:39.338743   29396 main.go:141] libmachine: (ha-238496) DBG | exit 0
	I0729 23:15:39.466724   29396 main.go:141] libmachine: (ha-238496) DBG | SSH cmd err, output: <nil>: 
	I0729 23:15:39.466990   29396 main.go:141] libmachine: (ha-238496) KVM machine creation complete!
	I0729 23:15:39.467279   29396 main.go:141] libmachine: (ha-238496) Calling .GetConfigRaw
	I0729 23:15:39.467833   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:39.468014   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:39.468168   29396 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 23:15:39.468181   29396 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:15:39.469401   29396 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 23:15:39.469417   29396 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 23:15:39.469425   29396 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 23:15:39.469431   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:39.471628   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.471976   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.472007   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.472163   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:39.472323   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.472474   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.472620   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:39.472769   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:39.472953   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:39.472964   29396 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 23:15:39.586090   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 23:15:39.586116   29396 main.go:141] libmachine: Detecting the provisioner...
	I0729 23:15:39.586125   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:39.588745   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.589128   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.589193   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.589246   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:39.589420   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.589583   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.589754   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:39.589909   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:39.590080   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:39.590090   29396 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 23:15:39.703946   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 23:15:39.704026   29396 main.go:141] libmachine: found compatible host: buildroot
	I0729 23:15:39.704035   29396 main.go:141] libmachine: Provisioning with buildroot...
	I0729 23:15:39.704042   29396 main.go:141] libmachine: (ha-238496) Calling .GetMachineName
	I0729 23:15:39.704269   29396 buildroot.go:166] provisioning hostname "ha-238496"
	I0729 23:15:39.704285   29396 main.go:141] libmachine: (ha-238496) Calling .GetMachineName
	I0729 23:15:39.704444   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:39.707086   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.707402   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.707431   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.707571   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:39.707766   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.707912   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.708018   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:39.708169   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:39.708332   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:39.708345   29396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-238496 && echo "ha-238496" | sudo tee /etc/hostname
	I0729 23:15:39.833009   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-238496
	
	I0729 23:15:39.833034   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:39.835707   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.836036   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.836064   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.836236   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:39.836439   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.836598   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:39.836747   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:39.836920   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:39.837122   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:39.837141   29396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-238496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-238496/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-238496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 23:15:39.955917   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 23:15:39.955943   29396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19347-12221/.minikube CaCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19347-12221/.minikube}
	I0729 23:15:39.955960   29396 buildroot.go:174] setting up certificates
	I0729 23:15:39.955968   29396 provision.go:84] configureAuth start
	I0729 23:15:39.955977   29396 main.go:141] libmachine: (ha-238496) Calling .GetMachineName
	I0729 23:15:39.956235   29396 main.go:141] libmachine: (ha-238496) Calling .GetIP
	I0729 23:15:39.958684   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.959026   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.959052   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.959163   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:39.961045   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.961447   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:39.961474   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:39.961623   29396 provision.go:143] copyHostCerts
	I0729 23:15:39.961650   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem
	I0729 23:15:39.961683   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem, removing ...
	I0729 23:15:39.961691   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem
	I0729 23:15:39.961763   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem (1078 bytes)
	I0729 23:15:39.961832   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem
	I0729 23:15:39.961850   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem, removing ...
	I0729 23:15:39.961854   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem
	I0729 23:15:39.961877   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem (1123 bytes)
	I0729 23:15:39.961914   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem
	I0729 23:15:39.961930   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem, removing ...
	I0729 23:15:39.961937   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem
	I0729 23:15:39.961957   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem (1675 bytes)
	I0729 23:15:39.962000   29396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem org=jenkins.ha-238496 san=[127.0.0.1 192.168.39.113 ha-238496 localhost minikube]
	I0729 23:15:40.158265   29396 provision.go:177] copyRemoteCerts
	I0729 23:15:40.158330   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 23:15:40.158351   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:40.160707   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.161020   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:40.161048   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.161164   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:40.161340   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.161493   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:40.161634   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:15:40.244831   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 23:15:40.244915   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 23:15:40.270164   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 23:15:40.270224   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 23:15:40.294569   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 23:15:40.294624   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 23:15:40.318683   29396 provision.go:87] duration metric: took 362.704805ms to configureAuth
	I0729 23:15:40.318728   29396 buildroot.go:189] setting minikube options for container-runtime
	I0729 23:15:40.318902   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:15:40.318927   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:40.319210   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:40.321636   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.322052   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:40.322071   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.322247   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:40.322438   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.322648   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.322792   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:40.322945   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:40.323111   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:40.323121   29396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 23:15:40.436017   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 23:15:40.436041   29396 buildroot.go:70] root file system type: tmpfs
	I0729 23:15:40.436132   29396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 23:15:40.436149   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:40.438671   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.439024   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:40.439051   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.439247   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:40.439419   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.439554   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.439683   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:40.439821   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:40.439970   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:40.440026   29396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 23:15:40.565241   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 23:15:40.565269   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:40.567788   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.568077   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:40.568099   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:40.568318   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:40.568530   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.568695   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:40.568834   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:40.569029   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:40.569212   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:40.569235   29396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 23:15:42.377799   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 23:15:42.377830   29396 main.go:141] libmachine: Checking connection to Docker...
	I0729 23:15:42.377841   29396 main.go:141] libmachine: (ha-238496) Calling .GetURL
	I0729 23:15:42.379169   29396 main.go:141] libmachine: (ha-238496) DBG | Using libvirt version 6000000
	I0729 23:15:42.381260   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.381560   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.381579   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.381732   29396 main.go:141] libmachine: Docker is up and running!
	I0729 23:15:42.381747   29396 main.go:141] libmachine: Reticulating splines...
	I0729 23:15:42.381755   29396 client.go:171] duration metric: took 26.283947034s to LocalClient.Create
	I0729 23:15:42.381778   29396 start.go:167] duration metric: took 26.284000692s to libmachine.API.Create "ha-238496"
	I0729 23:15:42.381791   29396 start.go:293] postStartSetup for "ha-238496" (driver="kvm2")
	I0729 23:15:42.381803   29396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 23:15:42.381827   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:42.382062   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 23:15:42.382082   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:42.384054   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.384328   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.384353   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.384454   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:42.384623   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:42.384755   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:42.384870   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:15:42.469528   29396 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 23:15:42.473609   29396 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 23:15:42.473629   29396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-12221/.minikube/addons for local assets ...
	I0729 23:15:42.473689   29396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-12221/.minikube/files for local assets ...
	I0729 23:15:42.473790   29396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> 194112.pem in /etc/ssl/certs
	I0729 23:15:42.473805   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> /etc/ssl/certs/194112.pem
	I0729 23:15:42.473926   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 23:15:42.483711   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem --> /etc/ssl/certs/194112.pem (1708 bytes)
	I0729 23:15:42.506318   29396 start.go:296] duration metric: took 124.51393ms for postStartSetup
	I0729 23:15:42.506367   29396 main.go:141] libmachine: (ha-238496) Calling .GetConfigRaw
	I0729 23:15:42.506923   29396 main.go:141] libmachine: (ha-238496) Calling .GetIP
	I0729 23:15:42.509333   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.509632   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.509667   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.509802   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:15:42.509962   29396 start.go:128] duration metric: took 26.429970449s to createHost
	I0729 23:15:42.509982   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:42.511816   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.512118   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.512140   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.512277   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:42.512422   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:42.512578   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:42.512697   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:42.512842   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:15:42.513014   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0729 23:15:42.513033   29396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 23:15:42.623228   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722294942.603042995
	
	I0729 23:15:42.623249   29396 fix.go:216] guest clock: 1722294942.603042995
	I0729 23:15:42.623257   29396 fix.go:229] Guest: 2024-07-29 23:15:42.603042995 +0000 UTC Remote: 2024-07-29 23:15:42.509972352 +0000 UTC m=+26.531920231 (delta=93.070643ms)
	I0729 23:15:42.623290   29396 fix.go:200] guest clock delta is within tolerance: 93.070643ms
	I0729 23:15:42.623295   29396 start.go:83] releasing machines lock for "ha-238496", held for 26.543370229s
	I0729 23:15:42.623314   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:42.623578   29396 main.go:141] libmachine: (ha-238496) Calling .GetIP
	I0729 23:15:42.625968   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.626314   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.626342   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.626428   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:42.626934   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:42.627089   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:15:42.627166   29396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 23:15:42.627222   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:42.627317   29396 ssh_runner.go:195] Run: cat /version.json
	I0729 23:15:42.627365   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:15:42.629771   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.630108   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.630134   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.630156   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.630291   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:42.630453   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:42.630511   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:42.630532   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:42.630615   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:42.630681   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:15:42.630757   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:15:42.630826   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:15:42.630932   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:15:42.631082   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:15:42.730752   29396 ssh_runner.go:195] Run: systemctl --version
	I0729 23:15:42.736524   29396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 23:15:42.742007   29396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 23:15:42.742057   29396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 23:15:42.759615   29396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 23:15:42.759637   29396 start.go:495] detecting cgroup driver to use...
	I0729 23:15:42.759746   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 23:15:42.777633   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 23:15:42.788127   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 23:15:42.798493   29396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 23:15:42.798537   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 23:15:42.808963   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 23:15:42.819514   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 23:15:42.829904   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 23:15:42.840528   29396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 23:15:42.850970   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 23:15:42.861324   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 23:15:42.871772   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 23:15:42.882086   29396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 23:15:42.891615   29396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 23:15:42.901032   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:15:43.009973   29396 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 23:15:43.033091   29396 start.go:495] detecting cgroup driver to use...
	I0729 23:15:43.033181   29396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 23:15:43.048969   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 23:15:43.062883   29396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 23:15:43.078870   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 23:15:43.091552   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 23:15:43.104157   29396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 23:15:43.136301   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 23:15:43.149567   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 23:15:43.168655   29396 ssh_runner.go:195] Run: which cri-dockerd
	I0729 23:15:43.172499   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 23:15:43.182004   29396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 23:15:43.199138   29396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 23:15:43.325878   29396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 23:15:43.453009   29396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 23:15:43.453127   29396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 23:15:43.470991   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:15:43.581853   29396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 23:15:45.932928   29396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.351034324s)
	I0729 23:15:45.933006   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 23:15:45.946192   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 23:15:45.959363   29396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 23:15:46.071000   29396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 23:15:46.191364   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:15:46.323471   29396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 23:15:46.341239   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 23:15:46.354892   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:15:46.471610   29396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 23:15:46.552045   29396 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 23:15:46.552118   29396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 23:15:46.557982   29396 start.go:563] Will wait 60s for crictl version
	I0729 23:15:46.558053   29396 ssh_runner.go:195] Run: which crictl
	I0729 23:15:46.562207   29396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 23:15:46.599562   29396 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0729 23:15:46.599682   29396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 23:15:46.627294   29396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 23:15:46.653596   29396 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0729 23:15:46.653641   29396 main.go:141] libmachine: (ha-238496) Calling .GetIP
	I0729 23:15:46.655975   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:46.656328   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:15:46.656356   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:15:46.656605   29396 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 23:15:46.660789   29396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 23:15:46.674267   29396 kubeadm.go:883] updating cluster {Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 23:15:46.674374   29396 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 23:15:46.674414   29396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 23:15:46.691432   29396 docker.go:685] Got preloaded images: 
	I0729 23:15:46.691458   29396 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0729 23:15:46.691514   29396 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 23:15:46.701601   29396 ssh_runner.go:195] Run: which lz4
	I0729 23:15:46.705691   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 23:15:46.705795   29396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 23:15:46.709989   29396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 23:15:46.710023   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0729 23:15:48.030772   29396 docker.go:649] duration metric: took 1.325008008s to copy over tarball
	I0729 23:15:48.030851   29396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 23:15:49.909865   29396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.878989077s)
	I0729 23:15:49.909896   29396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 23:15:49.946556   29396 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 23:15:49.958041   29396 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0729 23:15:49.976704   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:15:50.094931   29396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 23:15:53.045588   29396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.950621533s)
	I0729 23:15:53.045678   29396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 23:15:53.065699   29396 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 23:15:53.065726   29396 cache_images.go:84] Images are preloaded, skipping loading
	I0729 23:15:53.065753   29396 kubeadm.go:934] updating node { 192.168.39.113 8443 v1.30.3 docker true true} ...
	I0729 23:15:53.065875   29396 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-238496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 23:15:53.065948   29396 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 23:15:53.124463   29396 cni.go:84] Creating CNI manager for ""
	I0729 23:15:53.124487   29396 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 23:15:53.124500   29396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 23:15:53.124531   29396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-238496 NodeName:ha-238496 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 23:15:53.124724   29396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-238496"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 23:15:53.124755   29396 kube-vip.go:115] generating kube-vip config ...
	I0729 23:15:53.124798   29396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 23:15:53.140148   29396 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 23:15:53.140256   29396 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0729 23:15:53.140321   29396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 23:15:53.150652   29396 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 23:15:53.150754   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 23:15:53.160855   29396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0729 23:15:53.178094   29396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 23:15:53.195033   29396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 23:15:53.211624   29396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 23:15:53.227725   29396 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 23:15:53.231464   29396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 23:15:53.243680   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:15:53.359728   29396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 23:15:53.378242   29396 certs.go:68] Setting up /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496 for IP: 192.168.39.113
	I0729 23:15:53.378263   29396 certs.go:194] generating shared ca certs ...
	I0729 23:15:53.378278   29396 certs.go:226] acquiring lock for ca certs: {Name:mk651b4a346cb6b65a98f292d471b5ea2ee1b039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.378432   29396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key
	I0729 23:15:53.378498   29396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key
	I0729 23:15:53.378511   29396 certs.go:256] generating profile certs ...
	I0729 23:15:53.378560   29396 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key
	I0729 23:15:53.378574   29396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.crt with IP's: []
	I0729 23:15:53.439745   29396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.crt ...
	I0729 23:15:53.439770   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.crt: {Name:mk3680a79602e99b9ae91e80b8b2de160b5edb69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.439957   29396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key ...
	I0729 23:15:53.439970   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key: {Name:mk0e13c40ecb7926570f1b67b7773d1f6d768c18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.440072   29396 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.baf0f774
	I0729 23:15:53.440088   29396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.baf0f774 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.113 192.168.39.254]
	I0729 23:15:53.572007   29396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.baf0f774 ...
	I0729 23:15:53.572036   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.baf0f774: {Name:mke75c2770be3b25f9eacede6606aa30a2dd64eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.572192   29396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.baf0f774 ...
	I0729 23:15:53.572204   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.baf0f774: {Name:mke89460e234eb62312d64b4c7839272bd34a2fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.572269   29396 certs.go:381] copying /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.baf0f774 -> /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt
	I0729 23:15:53.572358   29396 certs.go:385] copying /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.baf0f774 -> /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key
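The apiserver serving certificate generated above carries five IP SANs: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes Service), 127.0.0.1, 10.0.0.1, the node IP 192.168.39.113, and the HA VIP 192.168.39.254. A hedged, self-signed openssl sketch that reproduces the same SAN set (minikube actually signs with its own CA; -addext needs OpenSSL 1.1.1+):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout apiserver.key -out apiserver.crt -subj "/CN=minikube" \
      -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.113,IP:192.168.39.254"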
	I0729 23:15:53.572414   29396 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key
	I0729 23:15:53.572429   29396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt with IP's: []
	I0729 23:15:53.966977   29396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt ...
	I0729 23:15:53.967010   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt: {Name:mk18942f8388406e75b85f575e1d984b1dcf1e12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.967185   29396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key ...
	I0729 23:15:53.967198   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key: {Name:mkf68b432adec4f2b6ef250568be4da083135a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:15:53.967262   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 23:15:53.967278   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 23:15:53.967289   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 23:15:53.967302   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 23:15:53.967314   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 23:15:53.967326   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 23:15:53.967349   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 23:15:53.967361   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 23:15:53.967411   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem (1338 bytes)
	W0729 23:15:53.967442   29396 certs.go:480] ignoring /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411_empty.pem, impossibly tiny 0 bytes
	I0729 23:15:53.967450   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 23:15:53.967476   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem (1078 bytes)
	I0729 23:15:53.967497   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem (1123 bytes)
	I0729 23:15:53.967520   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem (1675 bytes)
	I0729 23:15:53.967590   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem (1708 bytes)
	I0729 23:15:53.967618   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> /usr/share/ca-certificates/194112.pem
	I0729 23:15:53.967638   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:15:53.967650   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem -> /usr/share/ca-certificates/19411.pem
	I0729 23:15:53.968167   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 23:15:53.998671   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 23:15:54.024684   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 23:15:54.050581   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 23:15:54.075445   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 23:15:54.100891   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 23:15:54.125857   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 23:15:54.153786   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 23:15:54.178024   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem --> /usr/share/ca-certificates/194112.pem (1708 bytes)
	I0729 23:15:54.202191   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 23:15:54.234228   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem --> /usr/share/ca-certificates/19411.pem (1338 bytes)
	I0729 23:15:54.259155   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 23:15:54.279935   29396 ssh_runner.go:195] Run: openssl version
	I0729 23:15:54.286143   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 23:15:54.297792   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:15:54.302641   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:15:54.302708   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:15:54.309137   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 23:15:54.320807   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19411.pem && ln -fs /usr/share/ca-certificates/19411.pem /etc/ssl/certs/19411.pem"
	I0729 23:15:54.332599   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19411.pem
	I0729 23:15:54.337569   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/19411.pem
	I0729 23:15:54.337630   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19411.pem
	I0729 23:15:54.343919   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19411.pem /etc/ssl/certs/51391683.0"
	I0729 23:15:54.355853   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/194112.pem && ln -fs /usr/share/ca-certificates/194112.pem /etc/ssl/certs/194112.pem"
	I0729 23:15:54.367547   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/194112.pem
	I0729 23:15:54.372348   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/194112.pem
	I0729 23:15:54.372401   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/194112.pem
	I0729 23:15:54.378585   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/194112.pem /etc/ssl/certs/3ec20f2e.0"
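The three test -L / ln -fs commands above install each CA under the hash-named symlink OpenSSL uses for trust-store lookups: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA.pem, 51391683 for 19411.pem, 3ec20f2e for 194112.pem), and /etc/ssl/certs/<hash>.0 must point at the PEM for verification to find it. The same step as a generic sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"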
	I0729 23:15:54.389969   29396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 23:15:54.394421   29396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 23:15:54.394477   29396 kubeadm.go:392] StartCluster: {Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:15:54.394575   29396 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 23:15:54.415998   29396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 23:15:54.426580   29396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 23:15:54.436898   29396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 23:15:54.447198   29396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 23:15:54.447220   29396 kubeadm.go:157] found existing configuration files:
	
	I0729 23:15:54.447262   29396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 23:15:54.456798   29396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 23:15:54.456855   29396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 23:15:54.467435   29396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 23:15:54.477135   29396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 23:15:54.477189   29396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 23:15:54.487320   29396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 23:15:54.497272   29396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 23:15:54.497330   29396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 23:15:54.507122   29396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 23:15:54.516249   29396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 23:15:54.516304   29396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
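The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here every grep fails simply because the files do not exist yet on a fresh node). The same logic condensed into one loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done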
	I0729 23:15:54.526031   29396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 23:15:54.762624   29396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
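The only preflight warning is that the kubelet unit is running (minikube started it by hand at 23:15:53 above) but not enabled, so it would not come back after a reboot. The fix kubeadm itself suggests:

    sudo systemctl enable kubelet.service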
	I0729 23:16:06.821168   29396 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 23:16:06.821243   29396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 23:16:06.821329   29396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 23:16:06.821461   29396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 23:16:06.821619   29396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 23:16:06.821719   29396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 23:16:06.823115   29396 out.go:204]   - Generating certificates and keys ...
	I0729 23:16:06.823208   29396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 23:16:06.823294   29396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 23:16:06.823360   29396 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 23:16:06.823424   29396 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 23:16:06.823504   29396 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 23:16:06.823547   29396 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 23:16:06.823597   29396 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 23:16:06.823701   29396 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-238496 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0729 23:16:06.823767   29396 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 23:16:06.823882   29396 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-238496 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0729 23:16:06.823941   29396 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 23:16:06.823993   29396 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 23:16:06.824033   29396 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 23:16:06.824082   29396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 23:16:06.824128   29396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 23:16:06.824175   29396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 23:16:06.824232   29396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 23:16:06.824285   29396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 23:16:06.824356   29396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 23:16:06.824461   29396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 23:16:06.824544   29396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 23:16:06.825608   29396 out.go:204]   - Booting up control plane ...
	I0729 23:16:06.825690   29396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 23:16:06.825768   29396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 23:16:06.825823   29396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 23:16:06.825908   29396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 23:16:06.825995   29396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 23:16:06.826045   29396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 23:16:06.826170   29396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 23:16:06.826238   29396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 23:16:06.826287   29396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.29539ms
	I0729 23:16:06.826345   29396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 23:16:06.826398   29396 kubeadm.go:310] [api-check] The API server is healthy after 6.502060506s
	I0729 23:16:06.826538   29396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 23:16:06.826742   29396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 23:16:06.826829   29396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 23:16:06.826991   29396 kubeadm.go:310] [mark-control-plane] Marking the node ha-238496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 23:16:06.827039   29396 kubeadm.go:310] [bootstrap-token] Using token: 7wps4h.r9ujmgl0smjas4sr
	I0729 23:16:06.829264   29396 out.go:204]   - Configuring RBAC rules ...
	I0729 23:16:06.829363   29396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 23:16:06.829433   29396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 23:16:06.829579   29396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 23:16:06.829845   29396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 23:16:06.829982   29396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 23:16:06.830088   29396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 23:16:06.830228   29396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 23:16:06.830291   29396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 23:16:06.830365   29396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 23:16:06.830381   29396 kubeadm.go:310] 
	I0729 23:16:06.830443   29396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 23:16:06.830450   29396 kubeadm.go:310] 
	I0729 23:16:06.830522   29396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 23:16:06.830529   29396 kubeadm.go:310] 
	I0729 23:16:06.830558   29396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 23:16:06.830627   29396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 23:16:06.830705   29396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 23:16:06.830715   29396 kubeadm.go:310] 
	I0729 23:16:06.830769   29396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 23:16:06.830776   29396 kubeadm.go:310] 
	I0729 23:16:06.830814   29396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 23:16:06.830820   29396 kubeadm.go:310] 
	I0729 23:16:06.830861   29396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 23:16:06.830952   29396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 23:16:06.831064   29396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 23:16:06.831078   29396 kubeadm.go:310] 
	I0729 23:16:06.831177   29396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 23:16:06.831277   29396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 23:16:06.831285   29396 kubeadm.go:310] 
	I0729 23:16:06.831394   29396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wps4h.r9ujmgl0smjas4sr \
	I0729 23:16:06.831519   29396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da4124175dbd4d7966590c68bf3c2627d9fda969ee89096732ee7fd4a463dd4a \
	I0729 23:16:06.831550   29396 kubeadm.go:310] 	--control-plane 
	I0729 23:16:06.831564   29396 kubeadm.go:310] 
	I0729 23:16:06.831681   29396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 23:16:06.831690   29396 kubeadm.go:310] 
	I0729 23:16:06.831778   29396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wps4h.r9ujmgl0smjas4sr \
	I0729 23:16:06.831917   29396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da4124175dbd4d7966590c68bf3c2627d9fda969ee89096732ee7fd4a463dd4a 
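Both join commands pin the cluster CA with --discovery-token-ca-cert-hash, a SHA-256 digest of the CA certificate's Subject Public Key Info. One way to recompute it on the node and verify the value printed above (a sketch assuming the cert path from this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | awk '{print $NF}'   # should match da4124...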
	I0729 23:16:06.831929   29396 cni.go:84] Creating CNI manager for ""
	I0729 23:16:06.831939   29396 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0729 23:16:06.833425   29396 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 23:16:06.834468   29396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 23:16:06.840942   29396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 23:16:06.840959   29396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 23:16:06.859861   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 23:16:07.234421   29396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 23:16:07.234510   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:07.234551   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-238496 minikube.k8s.io/updated_at=2024_07_29T23_16_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=ha-238496 minikube.k8s.io/primary=true
	I0729 23:16:07.385204   29396 ops.go:34] apiserver oom_adj: -16
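The oom_adj probe at 23:16:07.234421 reads /proc/<pid>/oom_adj for the apiserver; the -16 logged here tells the kernel's OOM killer to strongly prefer other victims, so the control plane survives memory pressure. A hypothetical manual check (modern kernels expose the equivalent oom_score_adj alongside the legacy file):

    pid=$(pgrep -o kube-apiserver)
    cat "/proc/${pid}/oom_adj" "/proc/${pid}/oom_score_adj"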
	I0729 23:16:07.385357   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:07.885788   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:08.385838   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:08.885780   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:09.386269   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:09.886246   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:10.385618   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:10.886204   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:11.385914   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:11.885786   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:12.385963   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:12.886207   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:13.386172   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:13.886192   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:14.386169   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:14.885669   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:15.385587   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:15.886172   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:16.385515   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:16.886237   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:17.385987   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:17.885376   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:18.386069   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 23:16:18.536657   29396 kubeadm.go:1113] duration metric: took 11.302209455s to wait for elevateKubeSystemPrivileges
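The run of identical kubectl get sa default calls above is a poll: minikube retries roughly every 500ms until the controller manager has created the default service account, which is the precondition for the minikube-rbac cluster-admin binding created at 23:16:07; here the wait took 11.3s. The equivalent wait loop, assuming the same paths:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done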
	I0729 23:16:18.536711   29396 kubeadm.go:394] duration metric: took 24.142230868s to StartCluster
	I0729 23:16:18.536734   29396 settings.go:142] acquiring lock: {Name:mk17e1ab030b9e2103931d17b9ef30ea797bca5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:16:18.536844   29396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:16:18.537722   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/kubeconfig: {Name:mkcda89ba949a6d5877faacf6424d912f9a0066b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:16:18.538196   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 23:16:18.538213   29396 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:16:18.538246   29396 start.go:241] waiting for startup goroutines ...
	I0729 23:16:18.538255   29396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 23:16:18.538327   29396 addons.go:69] Setting storage-provisioner=true in profile "ha-238496"
	I0729 23:16:18.538338   29396 addons.go:69] Setting default-storageclass=true in profile "ha-238496"
	I0729 23:16:18.538360   29396 addons.go:234] Setting addon storage-provisioner=true in "ha-238496"
	I0729 23:16:18.538387   29396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-238496"
	I0729 23:16:18.538395   29396 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:16:18.538427   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:16:18.538830   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:18.538858   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:18.538862   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:18.538893   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:18.554878   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0729 23:16:18.554965   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0729 23:16:18.555294   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:18.555345   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:18.555846   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:18.555872   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:18.555977   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:18.555996   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:18.556210   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:18.556388   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:18.556437   29396 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:16:18.556923   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:18.556956   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:18.558602   29396 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:16:18.558947   29396 kapi.go:59] client config for ha-238496: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.crt", KeyFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key", CAFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 23:16:18.559506   29396 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 23:16:18.559711   29396 addons.go:234] Setting addon default-storageclass=true in "ha-238496"
	I0729 23:16:18.559753   29396 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:16:18.560124   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:18.560157   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:18.573011   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I0729 23:16:18.573508   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:18.574041   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:18.574064   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:18.574405   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:18.574600   29396 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:16:18.574987   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0729 23:16:18.575303   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:18.575763   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:18.575790   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:18.576170   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:18.576559   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:16:18.576789   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:18.576832   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:18.578472   29396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 23:16:18.579868   29396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 23:16:18.579890   29396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 23:16:18.579912   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:16:18.583493   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:16:18.583967   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:16:18.584002   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:16:18.584142   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:16:18.584345   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:16:18.584537   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:16:18.584699   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
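The ssh client minikube assembles here can be reproduced by hand when a failing test needs debugging on the node; the user, key path, and IP are all in the log line above:

    ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa \
      docker@192.168.39.113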
	I0729 23:16:18.592521   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0729 23:16:18.592939   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:18.593360   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:18.593380   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:18.593709   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:18.593904   29396 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:16:18.595554   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:16:18.595801   29396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 23:16:18.595819   29396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 23:16:18.595836   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:16:18.598908   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:16:18.599463   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:16:18.599487   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:16:18.599655   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:16:18.599843   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:16:18.600010   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:16:18.600114   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:16:18.672751   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 23:16:18.744494   29396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 23:16:18.790001   29396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 23:16:19.024537   29396 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
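The sed pipeline at 23:16:18.672751 rewrites the coredns ConfigMap in place: it inserts a hosts plugin block ahead of the forward directive so the guest's gateway (192.168.39.1) resolves as host.minikube.internal, and adds log after errors. The stanza it injects into the Corefile:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }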
	I0729 23:16:19.336491   29396 main.go:141] libmachine: Making call to close driver server
	I0729 23:16:19.336518   29396 main.go:141] libmachine: (ha-238496) Calling .Close
	I0729 23:16:19.336625   29396 main.go:141] libmachine: Making call to close driver server
	I0729 23:16:19.336646   29396 main.go:141] libmachine: (ha-238496) Calling .Close
	I0729 23:16:19.336824   29396 main.go:141] libmachine: Successfully made call to close driver server
	I0729 23:16:19.336842   29396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 23:16:19.336850   29396 main.go:141] libmachine: Making call to close driver server
	I0729 23:16:19.336857   29396 main.go:141] libmachine: (ha-238496) Calling .Close
	I0729 23:16:19.336884   29396 main.go:141] libmachine: Successfully made call to close driver server
	I0729 23:16:19.336905   29396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 23:16:19.336938   29396 main.go:141] libmachine: Making call to close driver server
	I0729 23:16:19.336950   29396 main.go:141] libmachine: (ha-238496) Calling .Close
	I0729 23:16:19.337040   29396 main.go:141] libmachine: Successfully made call to close driver server
	I0729 23:16:19.337052   29396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 23:16:19.337159   29396 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 23:16:19.337165   29396 round_trippers.go:469] Request Headers:
	I0729 23:16:19.337176   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:16:19.337182   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:16:19.337298   29396 main.go:141] libmachine: (ha-238496) DBG | Closing plugin on server side
	I0729 23:16:19.337357   29396 main.go:141] libmachine: Successfully made call to close driver server
	I0729 23:16:19.337379   29396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 23:16:19.358829   29396 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0729 23:16:19.359997   29396 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 23:16:19.360011   29396 round_trippers.go:469] Request Headers:
	I0729 23:16:19.360019   29396 round_trippers.go:473]     Content-Type: application/json
	I0729 23:16:19.360024   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:16:19.360029   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:16:19.365893   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:16:19.366053   29396 main.go:141] libmachine: Making call to close driver server
	I0729 23:16:19.366070   29396 main.go:141] libmachine: (ha-238496) Calling .Close
	I0729 23:16:19.366340   29396 main.go:141] libmachine: Successfully made call to close driver server
	I0729 23:16:19.366357   29396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 23:16:19.368339   29396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 23:16:19.369578   29396 addons.go:510] duration metric: took 831.320462ms for enable addons: enabled=[storage-provisioner default-storageclass]
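Addon state is recorded per profile, so the same pair can be inspected or toggled later with the minikube CLI (profile name from this run):

    out/minikube-linux-amd64 -p ha-238496 addons list
    out/minikube-linux-amd64 -p ha-238496 addons enable storage-provisioner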
	I0729 23:16:19.369612   29396 start.go:246] waiting for cluster config update ...
	I0729 23:16:19.369627   29396 start.go:255] writing updated cluster config ...
	I0729 23:16:19.371229   29396 out.go:177] 
	I0729 23:16:19.372506   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:16:19.372569   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:16:19.374125   29396 out.go:177] * Starting "ha-238496-m02" control-plane node in "ha-238496" cluster
	I0729 23:16:19.375196   29396 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 23:16:19.375218   29396 cache.go:56] Caching tarball of preloaded images
	I0729 23:16:19.375315   29396 preload.go:172] Found /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 23:16:19.375329   29396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 23:16:19.375392   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:16:19.375562   29396 start.go:360] acquireMachinesLock for ha-238496-m02: {Name:mk79fbc287386032c39e512567e9786663e657a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 23:16:19.375605   29396 start.go:364] duration metric: took 24.398µs to acquireMachinesLock for "ha-238496-m02"
	I0729 23:16:19.375625   29396 start.go:93] Provisioning new machine with config: &{Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:16:19.375695   29396 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 23:16:19.377072   29396 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 23:16:19.377141   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:19.377163   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:19.391837   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0729 23:16:19.392321   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:19.392882   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:19.392905   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:19.393200   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:19.393416   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetMachineName
	I0729 23:16:19.393559   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:19.393723   29396 start.go:159] libmachine.API.Create for "ha-238496" (driver="kvm2")
	I0729 23:16:19.393744   29396 client.go:168] LocalClient.Create starting
	I0729 23:16:19.393768   29396 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem
	I0729 23:16:19.393798   29396 main.go:141] libmachine: Decoding PEM data...
	I0729 23:16:19.393811   29396 main.go:141] libmachine: Parsing certificate...
	I0729 23:16:19.393860   29396 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem
	I0729 23:16:19.393878   29396 main.go:141] libmachine: Decoding PEM data...
	I0729 23:16:19.393887   29396 main.go:141] libmachine: Parsing certificate...
	I0729 23:16:19.393902   29396 main.go:141] libmachine: Running pre-create checks...
	I0729 23:16:19.393909   29396 main.go:141] libmachine: (ha-238496-m02) Calling .PreCreateCheck
	I0729 23:16:19.394061   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetConfigRaw
	I0729 23:16:19.394404   29396 main.go:141] libmachine: Creating machine...
	I0729 23:16:19.394417   29396 main.go:141] libmachine: (ha-238496-m02) Calling .Create
	I0729 23:16:19.394564   29396 main.go:141] libmachine: (ha-238496-m02) Creating KVM machine...
	I0729 23:16:19.395809   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found existing default KVM network
	I0729 23:16:19.395989   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found existing private KVM network mk-ha-238496
	I0729 23:16:19.396145   29396 main.go:141] libmachine: (ha-238496-m02) Setting up store path in /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02 ...
	I0729 23:16:19.396168   29396 main.go:141] libmachine: (ha-238496-m02) Building disk image from file:///home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 23:16:19.396233   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:19.396145   29816 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:16:19.396350   29396 main.go:141] libmachine: (ha-238496-m02) Downloading /home/jenkins/minikube-integration/19347-12221/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 23:16:19.672295   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:19.672179   29816 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa...
	I0729 23:16:19.900328   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:19.900217   29816 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/ha-238496-m02.rawdisk...
	I0729 23:16:19.900362   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Writing magic tar header
	I0729 23:16:19.900378   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Writing SSH key tar header
	I0729 23:16:19.900389   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:19.900329   29816 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02 ...
	I0729 23:16:19.900474   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02
	I0729 23:16:19.900496   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube/machines
	I0729 23:16:19.900510   29396 main.go:141] libmachine: (ha-238496-m02) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02 (perms=drwx------)
	I0729 23:16:19.900557   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:16:19.900590   29396 main.go:141] libmachine: (ha-238496-m02) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube/machines (perms=drwxr-xr-x)
	I0729 23:16:19.900603   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221
	I0729 23:16:19.900616   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 23:16:19.900627   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 23:16:19.900638   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Checking permissions on dir: /home
	I0729 23:16:19.900646   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Skipping /home - not owner
	I0729 23:16:19.900664   29396 main.go:141] libmachine: (ha-238496-m02) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube (perms=drwxr-xr-x)
	I0729 23:16:19.900681   29396 main.go:141] libmachine: (ha-238496-m02) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221 (perms=drwxrwxr-x)
	I0729 23:16:19.900695   29396 main.go:141] libmachine: (ha-238496-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 23:16:19.900706   29396 main.go:141] libmachine: (ha-238496-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 23:16:19.900738   29396 main.go:141] libmachine: (ha-238496-m02) Creating domain...
	I0729 23:16:19.901507   29396 main.go:141] libmachine: (ha-238496-m02) define libvirt domain using xml: 
	I0729 23:16:19.901527   29396 main.go:141] libmachine: (ha-238496-m02) <domain type='kvm'>
	I0729 23:16:19.901553   29396 main.go:141] libmachine: (ha-238496-m02)   <name>ha-238496-m02</name>
	I0729 23:16:19.901576   29396 main.go:141] libmachine: (ha-238496-m02)   <memory unit='MiB'>2200</memory>
	I0729 23:16:19.901588   29396 main.go:141] libmachine: (ha-238496-m02)   <vcpu>2</vcpu>
	I0729 23:16:19.901596   29396 main.go:141] libmachine: (ha-238496-m02)   <features>
	I0729 23:16:19.901614   29396 main.go:141] libmachine: (ha-238496-m02)     <acpi/>
	I0729 23:16:19.901625   29396 main.go:141] libmachine: (ha-238496-m02)     <apic/>
	I0729 23:16:19.901631   29396 main.go:141] libmachine: (ha-238496-m02)     <pae/>
	I0729 23:16:19.901637   29396 main.go:141] libmachine: (ha-238496-m02)     
	I0729 23:16:19.901642   29396 main.go:141] libmachine: (ha-238496-m02)   </features>
	I0729 23:16:19.901647   29396 main.go:141] libmachine: (ha-238496-m02)   <cpu mode='host-passthrough'>
	I0729 23:16:19.901652   29396 main.go:141] libmachine: (ha-238496-m02)   
	I0729 23:16:19.901656   29396 main.go:141] libmachine: (ha-238496-m02)   </cpu>
	I0729 23:16:19.901661   29396 main.go:141] libmachine: (ha-238496-m02)   <os>
	I0729 23:16:19.901666   29396 main.go:141] libmachine: (ha-238496-m02)     <type>hvm</type>
	I0729 23:16:19.901671   29396 main.go:141] libmachine: (ha-238496-m02)     <boot dev='cdrom'/>
	I0729 23:16:19.901676   29396 main.go:141] libmachine: (ha-238496-m02)     <boot dev='hd'/>
	I0729 23:16:19.901681   29396 main.go:141] libmachine: (ha-238496-m02)     <bootmenu enable='no'/>
	I0729 23:16:19.901685   29396 main.go:141] libmachine: (ha-238496-m02)   </os>
	I0729 23:16:19.901690   29396 main.go:141] libmachine: (ha-238496-m02)   <devices>
	I0729 23:16:19.901695   29396 main.go:141] libmachine: (ha-238496-m02)     <disk type='file' device='cdrom'>
	I0729 23:16:19.901703   29396 main.go:141] libmachine: (ha-238496-m02)       <source file='/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/boot2docker.iso'/>
	I0729 23:16:19.901712   29396 main.go:141] libmachine: (ha-238496-m02)       <target dev='hdc' bus='scsi'/>
	I0729 23:16:19.901717   29396 main.go:141] libmachine: (ha-238496-m02)       <readonly/>
	I0729 23:16:19.901721   29396 main.go:141] libmachine: (ha-238496-m02)     </disk>
	I0729 23:16:19.901730   29396 main.go:141] libmachine: (ha-238496-m02)     <disk type='file' device='disk'>
	I0729 23:16:19.901735   29396 main.go:141] libmachine: (ha-238496-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 23:16:19.901743   29396 main.go:141] libmachine: (ha-238496-m02)       <source file='/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/ha-238496-m02.rawdisk'/>
	I0729 23:16:19.901747   29396 main.go:141] libmachine: (ha-238496-m02)       <target dev='hda' bus='virtio'/>
	I0729 23:16:19.901752   29396 main.go:141] libmachine: (ha-238496-m02)     </disk>
	I0729 23:16:19.901756   29396 main.go:141] libmachine: (ha-238496-m02)     <interface type='network'>
	I0729 23:16:19.901761   29396 main.go:141] libmachine: (ha-238496-m02)       <source network='mk-ha-238496'/>
	I0729 23:16:19.901766   29396 main.go:141] libmachine: (ha-238496-m02)       <model type='virtio'/>
	I0729 23:16:19.901771   29396 main.go:141] libmachine: (ha-238496-m02)     </interface>
	I0729 23:16:19.901775   29396 main.go:141] libmachine: (ha-238496-m02)     <interface type='network'>
	I0729 23:16:19.901782   29396 main.go:141] libmachine: (ha-238496-m02)       <source network='default'/>
	I0729 23:16:19.901793   29396 main.go:141] libmachine: (ha-238496-m02)       <model type='virtio'/>
	I0729 23:16:19.901801   29396 main.go:141] libmachine: (ha-238496-m02)     </interface>
	I0729 23:16:19.901808   29396 main.go:141] libmachine: (ha-238496-m02)     <serial type='pty'>
	I0729 23:16:19.901816   29396 main.go:141] libmachine: (ha-238496-m02)       <target port='0'/>
	I0729 23:16:19.901821   29396 main.go:141] libmachine: (ha-238496-m02)     </serial>
	I0729 23:16:19.901826   29396 main.go:141] libmachine: (ha-238496-m02)     <console type='pty'>
	I0729 23:16:19.901831   29396 main.go:141] libmachine: (ha-238496-m02)       <target type='serial' port='0'/>
	I0729 23:16:19.901836   29396 main.go:141] libmachine: (ha-238496-m02)     </console>
	I0729 23:16:19.901840   29396 main.go:141] libmachine: (ha-238496-m02)     <rng model='virtio'>
	I0729 23:16:19.901850   29396 main.go:141] libmachine: (ha-238496-m02)       <backend model='random'>/dev/random</backend>
	I0729 23:16:19.901854   29396 main.go:141] libmachine: (ha-238496-m02)     </rng>
	I0729 23:16:19.901858   29396 main.go:141] libmachine: (ha-238496-m02)     
	I0729 23:16:19.901864   29396 main.go:141] libmachine: (ha-238496-m02)     
	I0729 23:16:19.901872   29396 main.go:141] libmachine: (ha-238496-m02)   </devices>
	I0729 23:16:19.901878   29396 main.go:141] libmachine: (ha-238496-m02) </domain>
	I0729 23:16:19.901892   29396 main.go:141] libmachine: (ha-238496-m02) 
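The lines above dump the complete libvirt domain XML the KVM driver generates for the new node: 2200 MiB of memory, 2 vCPUs, host-passthrough CPU, the boot2docker ISO attached as a bootable CDROM, the raw disk as a virtio disk, and two virtio NICs (the private mk-ha-238496 network plus libvirt's default network). A minimal sketch of rendering such a definition from a Go text/template follows; the template and field names are illustrative assumptions, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down illustration of the XML seen in the log.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type machine struct {
	Name, DiskPath, Network string
	MemoryMiB, CPUs         int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, machine{
		Name:      "ha-238496-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-238496-m02.rawdisk", // placeholder path
		Network:   "mk-ha-238496",
	})
}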
	I0729 23:16:19.908728   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:2a:9b:e1 in network default
	I0729 23:16:19.909244   29396 main.go:141] libmachine: (ha-238496-m02) Ensuring networks are active...
	I0729 23:16:19.909288   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:19.910028   29396 main.go:141] libmachine: (ha-238496-m02) Ensuring network default is active
	I0729 23:16:19.910449   29396 main.go:141] libmachine: (ha-238496-m02) Ensuring network mk-ha-238496 is active
	I0729 23:16:19.910816   29396 main.go:141] libmachine: (ha-238496-m02) Getting domain xml...
	I0729 23:16:19.911569   29396 main.go:141] libmachine: (ha-238496-m02) Creating domain...
	I0729 23:16:21.122194   29396 main.go:141] libmachine: (ha-238496-m02) Waiting to get IP...
	I0729 23:16:21.123143   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:21.123515   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:21.123536   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:21.123490   29816 retry.go:31] will retry after 210.542983ms: waiting for machine to come up
	I0729 23:16:21.335988   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:21.336378   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:21.336408   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:21.336354   29816 retry.go:31] will retry after 291.309738ms: waiting for machine to come up
	I0729 23:16:21.628887   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:21.629304   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:21.629331   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:21.629272   29816 retry.go:31] will retry after 460.631998ms: waiting for machine to come up
	I0729 23:16:22.092069   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:22.092559   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:22.092587   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:22.092518   29816 retry.go:31] will retry after 374.861132ms: waiting for machine to come up
	I0729 23:16:22.469027   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:22.469518   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:22.469546   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:22.469478   29816 retry.go:31] will retry after 604.947482ms: waiting for machine to come up
	I0729 23:16:23.076290   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:23.076777   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:23.076802   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:23.076724   29816 retry.go:31] will retry after 806.329173ms: waiting for machine to come up
	I0729 23:16:23.884406   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:23.884817   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:23.884846   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:23.884808   29816 retry.go:31] will retry after 803.379339ms: waiting for machine to come up
	I0729 23:16:24.689636   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:24.690078   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:24.690108   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:24.690039   29816 retry.go:31] will retry after 1.280518832s: waiting for machine to come up
	I0729 23:16:25.972490   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:25.972999   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:25.973030   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:25.972949   29816 retry.go:31] will retry after 1.549162667s: waiting for machine to come up
	I0729 23:16:27.523891   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:27.524360   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:27.524384   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:27.524311   29816 retry.go:31] will retry after 1.581798428s: waiting for machine to come up
	I0729 23:16:29.107873   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:29.108320   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:29.108343   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:29.108277   29816 retry.go:31] will retry after 1.968794912s: waiting for machine to come up
	I0729 23:16:31.078415   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:31.078906   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:31.078936   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:31.078857   29816 retry.go:31] will retry after 2.58499227s: waiting for machine to come up
	I0729 23:16:33.665171   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:33.665534   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:33.665559   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:33.665508   29816 retry.go:31] will retry after 4.074814902s: waiting for machine to come up
	I0729 23:16:37.743773   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:37.744169   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find current IP address of domain ha-238496-m02 in network mk-ha-238496
	I0729 23:16:37.744199   29396 main.go:141] libmachine: (ha-238496-m02) DBG | I0729 23:16:37.744119   29816 retry.go:31] will retry after 4.097801489s: waiting for machine to come up
	I0729 23:16:41.845420   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:41.845844   29396 main.go:141] libmachine: (ha-238496-m02) Found IP for machine: 192.168.39.226
	I0729 23:16:41.845892   29396 main.go:141] libmachine: (ha-238496-m02) Reserving static IP address...
	I0729 23:16:41.845907   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has current primary IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:41.846301   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find host DHCP lease matching {name: "ha-238496-m02", mac: "52:54:00:15:f5:ca", ip: "192.168.39.226"} in network mk-ha-238496
	I0729 23:16:41.924834   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Getting to WaitForSSH function...
	I0729 23:16:41.924862   29396 main.go:141] libmachine: (ha-238496-m02) Reserved static IP address: 192.168.39.226
	I0729 23:16:41.924875   29396 main.go:141] libmachine: (ha-238496-m02) Waiting for SSH to be available...
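The retry.go lines above show the driver polling libvirt's DHCP leases for the new MAC address, sleeping a randomized, growing interval between attempts (from roughly 0.2s up to about 4s) until the lease for 192.168.39.226 appears. A minimal sketch of that retry-with-backoff pattern, with illustrative names (the real helper lives in minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// retryWithBackoff calls fn until it succeeds or the deadline passes,
// sleeping a randomized, growing interval between attempts -- matching
// the "will retry after 210ms ... 4.09s" progression in the log.
func retryWithBackoff(fn func() error, deadline time.Duration) error {
	wait := 200 * time.Millisecond
	end := time.Now().Add(deadline)
	for {
		if err := fn(); err == nil {
			return nil
		} else if time.Now().After(end) {
			return fmt.Errorf("timed out: %w", err)
		} else {
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			wait = wait * 3 / 2 // grow roughly geometrically
		}
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		if attempts++; attempts < 5 {
			return errNoIP // simulate the lease not existing yet
		}
		return nil
	}, time.Minute)
}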
	I0729 23:16:41.926998   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:41.927263   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496
	I0729 23:16:41.927302   29396 main.go:141] libmachine: (ha-238496-m02) DBG | unable to find defined IP address of network mk-ha-238496 interface with MAC address 52:54:00:15:f5:ca
	I0729 23:16:41.927427   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Using SSH client type: external
	I0729 23:16:41.927457   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa (-rw-------)
	I0729 23:16:41.927501   29396 main.go:141] libmachine: (ha-238496-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 23:16:41.927520   29396 main.go:141] libmachine: (ha-238496-m02) DBG | About to run SSH command:
	I0729 23:16:41.927537   29396 main.go:141] libmachine: (ha-238496-m02) DBG | exit 0
	I0729 23:16:41.931361   29396 main.go:141] libmachine: (ha-238496-m02) DBG | SSH cmd err, output: exit status 255: 
	I0729 23:16:41.931392   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 23:16:41.931404   29396 main.go:141] libmachine: (ha-238496-m02) DBG | command : exit 0
	I0729 23:16:41.931412   29396 main.go:141] libmachine: (ha-238496-m02) DBG | err     : exit status 255
	I0729 23:16:41.931422   29396 main.go:141] libmachine: (ha-238496-m02) DBG | output  : 
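This first WaitForSSH attempt fails with exit status 255 because no DHCP lease is known yet, so the external ssh command is assembled with an empty host ("docker@" in the argument dump above). Once the lease shows up, the same probe (running "exit 0" over ssh) succeeds. A rough sketch of that probe using some of the options visible in the log; the function name and paths are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH shells out to the system ssh binary, as the KVM driver's
// external SSH client does, and runs "exit 0" to test connectivity.
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	// Exit status 255 (as in the failed attempt above) means ssh itself
	// could not connect; any other status comes from the remote command.
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := probeSSH("192.168.39.226", "/path/to/id_rsa") // placeholder key path
	fmt.Println("probe:", err)
}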
	I0729 23:16:44.931808   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Getting to WaitForSSH function...
	I0729 23:16:44.934574   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:44.934999   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:44.935029   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:44.935130   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Using SSH client type: external
	I0729 23:16:44.935160   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa (-rw-------)
	I0729 23:16:44.935214   29396 main.go:141] libmachine: (ha-238496-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 23:16:44.935254   29396 main.go:141] libmachine: (ha-238496-m02) DBG | About to run SSH command:
	I0729 23:16:44.935283   29396 main.go:141] libmachine: (ha-238496-m02) DBG | exit 0
	I0729 23:16:45.058642   29396 main.go:141] libmachine: (ha-238496-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 23:16:45.058949   29396 main.go:141] libmachine: (ha-238496-m02) KVM machine creation complete!
	I0729 23:16:45.059234   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetConfigRaw
	I0729 23:16:45.059788   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:45.059995   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:45.060119   29396 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 23:16:45.060135   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetState
	I0729 23:16:45.061649   29396 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 23:16:45.061663   29396 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 23:16:45.061668   29396 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 23:16:45.061674   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.064202   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.064562   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.064588   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.064706   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.064883   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.065030   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.065154   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.065291   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:45.065502   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:45.065516   29396 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 23:16:45.166119   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 23:16:45.166141   29396 main.go:141] libmachine: Detecting the provisioner...
	I0729 23:16:45.166149   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.168854   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.169205   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.169234   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.169346   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.169578   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.169731   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.169874   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.170021   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:45.170177   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:45.170196   29396 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 23:16:45.275789   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 23:16:45.275875   29396 main.go:141] libmachine: found compatible host: buildroot
	I0729 23:16:45.275890   29396 main.go:141] libmachine: Provisioning with buildroot...
	I0729 23:16:45.275904   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetMachineName
	I0729 23:16:45.276160   29396 buildroot.go:166] provisioning hostname "ha-238496-m02"
	I0729 23:16:45.276180   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetMachineName
	I0729 23:16:45.276336   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.278898   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.279211   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.279232   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.279339   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.279533   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.279682   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.279823   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.280005   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:45.280214   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:45.280232   29396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-238496-m02 && echo "ha-238496-m02" | sudo tee /etc/hostname
	I0729 23:16:45.398012   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-238496-m02
	
	I0729 23:16:45.398033   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.400557   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.400983   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.401017   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.401186   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.401376   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.401565   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.401720   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.401909   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:45.402091   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:45.402113   29396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-238496-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-238496-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-238496-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 23:16:45.511766   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
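(The shell snippet above is idempotent: it only touches /etc/hosts when no line already maps the new hostname, replacing an existing 127.0.1.1 entry if present and appending one otherwise, so the node's own name always resolves locally.)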
	I0729 23:16:45.511797   29396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19347-12221/.minikube CaCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19347-12221/.minikube}
	I0729 23:16:45.511811   29396 buildroot.go:174] setting up certificates
	I0729 23:16:45.511819   29396 provision.go:84] configureAuth start
	I0729 23:16:45.511827   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetMachineName
	I0729 23:16:45.512093   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetIP
	I0729 23:16:45.514793   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.515163   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.515185   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.515367   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.517391   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.517686   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.517713   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.517853   29396 provision.go:143] copyHostCerts
	I0729 23:16:45.517886   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem
	I0729 23:16:45.517926   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem, removing ...
	I0729 23:16:45.517936   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem
	I0729 23:16:45.518017   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem (1675 bytes)
	I0729 23:16:45.518105   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem
	I0729 23:16:45.518130   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem, removing ...
	I0729 23:16:45.518139   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem
	I0729 23:16:45.518175   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem (1078 bytes)
	I0729 23:16:45.518231   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem
	I0729 23:16:45.518256   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem, removing ...
	I0729 23:16:45.518264   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem
	I0729 23:16:45.518293   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem (1123 bytes)
	I0729 23:16:45.518359   29396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem org=jenkins.ha-238496-m02 san=[127.0.0.1 192.168.39.226 ha-238496-m02 localhost minikube]
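provision.go then mints a server certificate for the node, signed by the shared minikube CA, with the SANs listed above (127.0.0.1, the node IP 192.168.39.226, the hostname, localhost, minikube) so the Docker daemon's TLS endpoint verifies under any of those names. A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; it is self-contained, generating a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair (minikube loads these from ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-238496-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.226")},
		DNSNames:     []string{"ha-238496-m02", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}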
	I0729 23:16:45.694511   29396 provision.go:177] copyRemoteCerts
	I0729 23:16:45.694577   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 23:16:45.694604   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.697058   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.697459   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.697485   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.697707   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.697873   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.698021   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.698146   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa Username:docker}
	I0729 23:16:45.781576   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 23:16:45.781638   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 23:16:45.808822   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 23:16:45.808896   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 23:16:45.834554   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 23:16:45.834635   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 23:16:45.859821   29396 provision.go:87] duration metric: took 347.99007ms to configureAuth
	I0729 23:16:45.859854   29396 buildroot.go:189] setting minikube options for container-runtime
	I0729 23:16:45.860021   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:16:45.860044   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:45.860315   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.862902   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.863203   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.863232   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.863404   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.863595   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.863762   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.863928   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.864072   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:45.864287   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:45.864298   29396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 23:16:45.968371   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 23:16:45.968416   29396 buildroot.go:70] root file system type: tmpfs
	I0729 23:16:45.968559   29396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 23:16:45.968584   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:45.971687   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.972162   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:45.972195   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:45.972355   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:45.972555   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.972746   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:45.972878   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:45.973043   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:45.973244   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:45.973307   29396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.113"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 23:16:46.094512   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.113
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 23:16:46.094543   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:46.097313   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:46.097672   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:46.097700   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:46.097930   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:46.098131   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:46.098292   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:46.098423   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:46.098570   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:46.098775   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:46.098792   29396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 23:16:47.867253   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
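The "diff ... || { mv ...; systemctl ...; }" command above makes the unit install idempotent: diff exits non-zero when the rendered unit differs from the installed one, or, as here, when no docker.service exists yet ("can't stat"), and only then is the new file moved into place and docker enabled and restarted. When nothing changed, the daemon is left untouched.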
	
	I0729 23:16:47.867290   29396 main.go:141] libmachine: Checking connection to Docker...
	I0729 23:16:47.867302   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetURL
	I0729 23:16:47.868904   29396 main.go:141] libmachine: (ha-238496-m02) DBG | Using libvirt version 6000000
	I0729 23:16:47.871705   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:47.872324   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:47.872350   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:47.872527   29396 main.go:141] libmachine: Docker is up and running!
	I0729 23:16:47.872542   29396 main.go:141] libmachine: Reticulating splines...
	I0729 23:16:47.872549   29396 client.go:171] duration metric: took 28.478798279s to LocalClient.Create
	I0729 23:16:47.872571   29396 start.go:167] duration metric: took 28.478849364s to libmachine.API.Create "ha-238496"
	I0729 23:16:47.872584   29396 start.go:293] postStartSetup for "ha-238496-m02" (driver="kvm2")
	I0729 23:16:47.872596   29396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 23:16:47.872614   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:47.872875   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 23:16:47.872899   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:47.875125   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:47.875484   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:47.875511   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:47.875666   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:47.875845   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:47.876016   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:47.876187   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa Username:docker}
	I0729 23:16:47.957455   29396 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 23:16:47.961907   29396 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 23:16:47.961934   29396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-12221/.minikube/addons for local assets ...
	I0729 23:16:47.962007   29396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-12221/.minikube/files for local assets ...
	I0729 23:16:47.962103   29396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> 194112.pem in /etc/ssl/certs
	I0729 23:16:47.962115   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> /etc/ssl/certs/194112.pem
	I0729 23:16:47.962226   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 23:16:47.972087   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem --> /etc/ssl/certs/194112.pem (1708 bytes)
	I0729 23:16:48.000323   29396 start.go:296] duration metric: took 127.725465ms for postStartSetup
	I0729 23:16:48.000380   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetConfigRaw
	I0729 23:16:48.000995   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetIP
	I0729 23:16:48.003937   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.004303   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:48.004331   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.004613   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:16:48.004818   29396 start.go:128] duration metric: took 28.62911238s to createHost
	I0729 23:16:48.004843   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:48.007143   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.007517   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:48.007545   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.007747   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:48.007948   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:48.008111   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:48.008270   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:48.008453   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:16:48.008662   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0729 23:16:48.008682   29396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 23:16:48.115978   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722295008.094136082
	
	I0729 23:16:48.115999   29396 fix.go:216] guest clock: 1722295008.094136082
	I0729 23:16:48.116008   29396 fix.go:229] Guest: 2024-07-29 23:16:48.094136082 +0000 UTC Remote: 2024-07-29 23:16:48.004831359 +0000 UTC m=+92.026779250 (delta=89.304723ms)
	I0729 23:16:48.116030   29396 fix.go:200] guest clock delta is within tolerance: 89.304723ms
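The fix.go lines compare the guest clock against the host at the same instant: 23:16:48.094136082 minus 23:16:48.004831359 is 0.089304723 s, i.e. 89.304723 ms, under the sync tolerance, so no clock adjustment is needed. The arithmetic, checked in Go using the epoch-nanosecond values from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(0, 1722295008094136082)  // guest clock from the log
	remote := time.Unix(0, 1722295008004831359) // remote timestamp from the log
	fmt.Println(guest.Sub(remote))              // prints 89.304723ms, matching the log
}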
	I0729 23:16:48.116037   29396 start.go:83] releasing machines lock for "ha-238496-m02", held for 28.740421202s
	I0729 23:16:48.116059   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:48.116307   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetIP
	I0729 23:16:48.118859   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.119193   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:48.119227   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.121181   29396 out.go:177] * Found network options:
	I0729 23:16:48.122795   29396 out.go:177]   - NO_PROXY=192.168.39.113
	W0729 23:16:48.124384   29396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 23:16:48.124408   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:48.124963   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:48.125136   29396 main.go:141] libmachine: (ha-238496-m02) Calling .DriverName
	I0729 23:16:48.125213   29396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 23:16:48.125257   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	W0729 23:16:48.125325   29396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 23:16:48.125413   29396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 23:16:48.125435   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHHostname
	I0729 23:16:48.127820   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.128046   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.128181   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:48.128211   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.128326   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:48.128488   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:48.128514   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:48.128534   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:48.128641   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:48.128709   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHPort
	I0729 23:16:48.128790   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa Username:docker}
	I0729 23:16:48.128837   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHKeyPath
	I0729 23:16:48.128942   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetSSHUsername
	I0729 23:16:48.129104   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m02/id_rsa Username:docker}
	W0729 23:16:48.205896   29396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 23:16:48.205956   29396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 23:16:48.228230   29396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 23:16:48.228252   29396 start.go:495] detecting cgroup driver to use...
	I0729 23:16:48.228343   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 23:16:48.248044   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 23:16:48.261247   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 23:16:48.274271   29396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 23:16:48.274375   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 23:16:48.287313   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 23:16:48.300525   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 23:16:48.313542   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 23:16:48.326487   29396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 23:16:48.339511   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 23:16:48.352454   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 23:16:48.365423   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
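The run of sed edits above is how the provisioner reconciles containerd's configuration on the new node: the pause image, OOM-score restriction, cgroupfs cgroup driver (SystemdCgroup = false), runc v2 shim, and CNI conf dir are each patched in place in /etc/containerd/config.toml. A minimal local sketch of just the cgroup-driver rewrite, assuming the same file path (illustrative Go, not minikube's actual code, which runs sed over SSH):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml" // path assumed, as in the log
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent to: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }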
	I0729 23:16:48.378632   29396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 23:16:48.390528   29396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 23:16:48.402554   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:16:48.537443   29396 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 23:16:48.562887   29396 start.go:495] detecting cgroup driver to use...
	I0729 23:16:48.562978   29396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 23:16:48.583195   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 23:16:48.600886   29396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 23:16:48.626610   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 23:16:48.640651   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 23:16:48.654131   29396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 23:16:48.685200   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 23:16:48.699226   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 23:16:48.719673   29396 ssh_runner.go:195] Run: which cri-dockerd
	I0729 23:16:48.723763   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 23:16:48.733499   29396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 23:16:48.751414   29396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 23:16:48.878991   29396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 23:16:49.003264   29396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 23:16:49.003303   29396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 23:16:49.021808   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:16:49.153978   29396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 23:16:51.526166   29396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.372153577s)
	I0729 23:16:51.526240   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 23:16:51.540938   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 23:16:51.555750   29396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 23:16:51.682013   29396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 23:16:51.813640   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:16:51.945396   29396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 23:16:51.965941   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 23:16:51.983182   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:16:52.109586   29396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 23:16:52.189812   29396 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 23:16:52.189870   29396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 23:16:52.196237   29396 start.go:563] Will wait 60s for crictl version
	I0729 23:16:52.196309   29396 ssh_runner.go:195] Run: which crictl
	I0729 23:16:52.202212   29396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 23:16:52.244282   29396 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
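Both waits above follow the same pattern: the runtime socket (/var/run/cri-dockerd.sock) and a working crictl each get up to 60s to become available before provisioning continues. A sketch of such a bounded wait, with an assumed 500ms poll interval (not necessarily minikube's):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the deadline passes, mirroring
    // the "Will wait 60s for socket path" step in the log above.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("socket is ready")
    }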
	I0729 23:16:52.244342   29396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 23:16:52.273830   29396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 23:16:52.298731   29396 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0729 23:16:52.300116   29396 out.go:177]   - env NO_PROXY=192.168.39.113
	I0729 23:16:52.301288   29396 main.go:141] libmachine: (ha-238496-m02) Calling .GetIP
	I0729 23:16:52.304199   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:52.304576   29396 main.go:141] libmachine: (ha-238496-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:ca", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:16:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:ca Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-238496-m02 Clientid:01:52:54:00:15:f5:ca}
	I0729 23:16:52.304610   29396 main.go:141] libmachine: (ha-238496-m02) DBG | domain ha-238496-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:15:f5:ca in network mk-ha-238496
	I0729 23:16:52.304844   29396 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 23:16:52.309522   29396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
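The one-liner above makes the hosts entry idempotent: any stale host.minikube.internal line is filtered out, the fresh gateway mapping is appended, and the result is copied back over /etc/hosts. The same rewrite sketched in Go (local file access for illustration; the log performs it over SSH):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.1\thost.minikube.internal" // gateway IP from the log
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping, exactly what the grep -v above does.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }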
	I0729 23:16:52.322555   29396 mustload.go:65] Loading cluster: ha-238496
	I0729 23:16:52.322770   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:16:52.323008   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:52.323033   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:52.337685   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0729 23:16:52.338156   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:52.338607   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:52.338626   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:52.338935   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:52.339112   29396 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:16:52.340721   29396 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:16:52.341000   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:16:52.341020   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:16:52.355343   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0729 23:16:52.355792   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:16:52.356236   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:16:52.356257   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:16:52.356549   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:16:52.356710   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:16:52.356844   29396 certs.go:68] Setting up /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496 for IP: 192.168.39.226
	I0729 23:16:52.356855   29396 certs.go:194] generating shared ca certs ...
	I0729 23:16:52.356874   29396 certs.go:226] acquiring lock for ca certs: {Name:mk651b4a346cb6b65a98f292d471b5ea2ee1b039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:16:52.357018   29396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key
	I0729 23:16:52.357071   29396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key
	I0729 23:16:52.357085   29396 certs.go:256] generating profile certs ...
	I0729 23:16:52.357177   29396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key
	I0729 23:16:52.357209   29396 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.ea71b923
	I0729 23:16:52.357227   29396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.ea71b923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.113 192.168.39.226 192.168.39.254]
	I0729 23:16:52.438426   29396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.ea71b923 ...
	I0729 23:16:52.438451   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.ea71b923: {Name:mk91f223539790286183d375e09336e0661489e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:16:52.438609   29396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.ea71b923 ...
	I0729 23:16:52.438621   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.ea71b923: {Name:mk0395716d5db1f731970b42562c431b469c4d3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:16:52.438718   29396 certs.go:381] copying /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.ea71b923 -> /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt
	I0729 23:16:52.438849   29396 certs.go:385] copying /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.ea71b923 -> /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key
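The apiserver serving cert generated above carries IP SANs for every address a client can reach the control plane on: the in-cluster service IP (10.96.0.1), loopback, both control-plane node IPs, and the kube-vip virtual IP 192.168.39.254, so TLS verification succeeds no matter which endpoint is dialed. A self-signed sketch of such a template (minikube signs with its cluster CA instead; names and validity are illustrative):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // SAN list mirrors the log line above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.168.39.113"), net.ParseIP("192.168.39.226"),
    			net.ParseIP("192.168.39.254"), // the HA virtual IP
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("generated %d-byte DER cert\n", len(der))
    }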
	I0729 23:16:52.438963   29396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key
	I0729 23:16:52.438978   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 23:16:52.438991   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 23:16:52.439004   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 23:16:52.439018   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 23:16:52.439030   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 23:16:52.439042   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 23:16:52.439054   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 23:16:52.439066   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 23:16:52.439111   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem (1338 bytes)
	W0729 23:16:52.439137   29396 certs.go:480] ignoring /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411_empty.pem, impossibly tiny 0 bytes
	I0729 23:16:52.439146   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 23:16:52.439166   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem (1078 bytes)
	I0729 23:16:52.439186   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem (1123 bytes)
	I0729 23:16:52.439206   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem (1675 bytes)
	I0729 23:16:52.439245   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem (1708 bytes)
	I0729 23:16:52.439270   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem -> /usr/share/ca-certificates/19411.pem
	I0729 23:16:52.439283   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> /usr/share/ca-certificates/194112.pem
	I0729 23:16:52.439296   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:16:52.439324   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:16:52.442047   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:16:52.442480   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:16:52.442503   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:16:52.442682   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:16:52.442905   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:16:52.443046   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:16:52.443158   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:16:52.518984   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 23:16:52.523859   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 23:16:52.534773   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 23:16:52.538885   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 23:16:52.549218   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 23:16:52.553615   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 23:16:52.564716   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 23:16:52.568669   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 23:16:52.579330   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 23:16:52.583704   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 23:16:52.597894   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 23:16:52.602188   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 23:16:52.612958   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 23:16:52.638256   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 23:16:52.661756   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 23:16:52.684792   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 23:16:52.707756   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 23:16:52.731576   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 23:16:52.754441   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 23:16:52.777771   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 23:16:52.801003   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem --> /usr/share/ca-certificates/19411.pem (1338 bytes)
	I0729 23:16:52.825430   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem --> /usr/share/ca-certificates/194112.pem (1708 bytes)
	I0729 23:16:52.848979   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 23:16:52.873731   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 23:16:52.890569   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 23:16:52.907546   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 23:16:52.924240   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 23:16:52.940802   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 23:16:52.956658   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 23:16:52.972800   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 23:16:52.989957   29396 ssh_runner.go:195] Run: openssl version
	I0729 23:16:52.995807   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19411.pem && ln -fs /usr/share/ca-certificates/19411.pem /etc/ssl/certs/19411.pem"
	I0729 23:16:53.006586   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19411.pem
	I0729 23:16:53.011272   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/19411.pem
	I0729 23:16:53.011333   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19411.pem
	I0729 23:16:53.017395   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19411.pem /etc/ssl/certs/51391683.0"
	I0729 23:16:53.029003   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/194112.pem && ln -fs /usr/share/ca-certificates/194112.pem /etc/ssl/certs/194112.pem"
	I0729 23:16:53.040176   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/194112.pem
	I0729 23:16:53.044895   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/194112.pem
	I0729 23:16:53.044970   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/194112.pem
	I0729 23:16:53.051179   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/194112.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 23:16:53.063199   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 23:16:53.074351   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:16:53.078889   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:16:53.078951   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:16:53.084785   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
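Each trust step above follows OpenSSL's hash-directory convention: the PEM lands under /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and a <hash>.0 symlink is created in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem) so certificate lookup by hash finds it. The same step sketched in Go, shelling out to openssl as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	_ = os.Remove(link) // replace any stale link, as ln -fs would
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    }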
	I0729 23:16:53.095633   29396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 23:16:53.099717   29396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 23:16:53.099765   29396 kubeadm.go:934] updating node {m02 192.168.39.226 8443 v1.30.3 docker true true} ...
	I0729 23:16:53.099848   29396 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-238496-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 23:16:53.099872   29396 kube-vip.go:115] generating kube-vip config ...
	I0729 23:16:53.099904   29396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 23:16:53.115368   29396 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 23:16:53.115456   29396 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
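The manifest above is what gives the cluster its HA endpoint: kube-vip runs as a static pod on each control plane, leader-elects via the plndr-cp-lock lease, and the leader answers ARP for the virtual IP 192.168.39.254, with lb_enable spreading port 8443 across the apiservers. Deploying it needs no API call; the rendered YAML only has to land in the kubelet's static-pod directory, which is what the later scp to /etc/kubernetes/manifests/kube-vip.yaml does. A sketch of that step (contents abbreviated):

    package main

    import "os"

    func main() {
    	// The kubelet watches this directory and starts any pod manifest placed in it.
    	manifest := []byte("apiVersion: v1\nkind: Pod\n# ...rendered kube-vip spec from above...\n")
    	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o600); err != nil {
    		panic(err)
    	}
    }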
	I0729 23:16:53.115507   29396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 23:16:53.125267   29396 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 23:16:53.125324   29396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 23:16:53.134688   29396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 23:16:53.134720   29396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 23:16:53.134688   29396 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 23:16:53.134844   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 23:16:53.134934   29396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 23:16:53.139246   29396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 23:16:53.139276   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 23:17:00.942779   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 23:17:00.942853   29396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 23:17:00.947933   29396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 23:17:00.947959   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 23:17:05.522395   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:17:05.538088   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 23:17:05.538185   29396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 23:17:05.542643   29396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 23:17:05.542679   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
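Each binary transfer above is guarded the same way: a remote stat -c "%s %y" existence check runs first, and only when it fails (status 1, no such file) is the cached binary copied over. A local sketch of that guard, with hypothetical helper names standing in for the SSH runner and scp:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // scp is a hypothetical stand-in for the real transfer step.
    func scp(local, remote string) error {
    	fmt.Printf("copying %s -> %s\n", local, remote)
    	return nil
    }

    // ensureBinary mirrors the guard in the log: stat the target path first,
    // and only transfer the cached binary when stat fails.
    func ensureBinary(local, remote string) error {
    	if err := exec.Command("stat", "-c", "%s %y", remote).Run(); err == nil {
    		return nil // already present (size/mtime comparison elided)
    	}
    	return scp(local, remote)
    }

    func main() {
    	_ = ensureBinary(".minikube/cache/linux/amd64/v1.30.3/kubectl",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl")
    }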
	I0729 23:17:05.957616   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 23:17:05.967186   29396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0729 23:17:05.984857   29396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 23:17:06.004743   29396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 23:17:06.022735   29396 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 23:17:06.026830   29396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 23:17:06.039823   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:17:06.168431   29396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 23:17:06.194567   29396 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:17:06.195059   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:17:06.195120   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:17:06.210453   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0729 23:17:06.210933   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:17:06.211515   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:17:06.211544   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:17:06.211877   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:17:06.212057   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:17:06.212216   29396 start.go:317] joinCluster: &{Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:17:06.212386   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 23:17:06.212409   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:17:06.215839   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:17:06.216287   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:17:06.216321   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:17:06.216483   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:17:06.216659   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:17:06.216837   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:17:06.216979   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:17:06.426321   29396 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:17:06.426369   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z9n94n.tldcwy00xngrv8wr --discovery-token-ca-cert-hash sha256:da4124175dbd4d7966590c68bf3c2627d9fda969ee89096732ee7fd4a463dd4a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-238496-m02 --control-plane --apiserver-advertise-address=192.168.39.226 --apiserver-bind-port=8443"
	I0729 23:17:29.346176   29396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z9n94n.tldcwy00xngrv8wr --discovery-token-ca-cert-hash sha256:da4124175dbd4d7966590c68bf3c2627d9fda969ee89096732ee7fd4a463dd4a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-238496-m02 --control-plane --apiserver-advertise-address=192.168.39.226 --apiserver-bind-port=8443": (22.919780887s)
	I0729 23:17:29.346215   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 23:17:29.882187   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-238496-m02 minikube.k8s.io/updated_at=2024_07_29T23_17_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=ha-238496 minikube.k8s.io/primary=false
	I0729 23:17:30.030849   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-238496-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 23:17:30.159567   29396 start.go:319] duration metric: took 23.947350002s to joinCluster
	I0729 23:17:30.159629   29396 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:17:30.159910   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:17:30.161150   29396 out.go:177] * Verifying Kubernetes components...
	I0729 23:17:30.162759   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:17:30.452541   29396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 23:17:30.485926   29396 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:17:30.486245   29396 kapi.go:59] client config for ha-238496: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.crt", KeyFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key", CAFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 23:17:30.486335   29396 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.113:8443
	I0729 23:17:30.486630   29396 node_ready.go:35] waiting up to 6m0s for node "ha-238496-m02" to be "Ready" ...
	I0729 23:17:30.486743   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:30.486754   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:30.486765   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:30.486774   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:30.498725   29396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 23:17:30.986874   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:30.986899   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:30.986910   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:30.986916   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:31.037211   29396 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0729 23:17:31.487111   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:31.487138   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:31.487149   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:31.487158   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:31.505988   29396 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0729 23:17:31.986880   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:31.986903   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:31.986914   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:31.986923   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:31.995565   29396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 23:17:32.487330   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:32.487360   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:32.487369   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:32.487374   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:32.493333   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:32.493741   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
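The polling visible above is a plain readiness loop: roughly every 500ms the node object is fetched and its NodeReady condition inspected, until it flips to True or the 6m budget runs out. The same check written against client-go, assuming a kubeconfig path (the overall timeout is left out for brevity):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-238496-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    }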
	I0729 23:17:32.986894   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:32.986923   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:32.986934   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:32.986940   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:32.990578   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:33.486980   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:33.487007   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:33.487018   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:33.487026   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:33.490073   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:33.986899   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:33.986918   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:33.986926   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:33.986930   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:33.990326   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:34.487551   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:34.487571   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:34.487578   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:34.487582   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:34.491541   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:34.987623   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:34.987644   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:34.987652   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:34.987656   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:34.991325   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:34.991873   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:35.487183   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:35.487210   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:35.487223   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:35.487229   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:35.491177   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:35.987223   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:35.987248   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:35.987259   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:35.987266   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:35.990525   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:36.486820   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:36.486843   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:36.486855   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:36.486859   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:36.490221   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:36.987151   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:36.987172   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:36.987183   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:36.987189   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:36.990147   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:17:37.486819   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:37.486842   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:37.486849   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:37.486853   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:37.490362   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:37.490858   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:37.987231   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:37.987266   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:37.987278   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:37.987282   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:37.991172   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:38.487456   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:38.487480   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:38.487488   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:38.487492   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:38.491185   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:38.987722   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:38.987746   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:38.987757   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:38.987760   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:38.990953   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:39.487485   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:39.487507   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:39.487515   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:39.487519   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:39.490615   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:39.491413   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:39.987861   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:39.987885   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:39.987894   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:39.987899   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:39.991871   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:40.487060   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:40.487089   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:40.487099   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:40.487104   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:40.490670   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:40.986848   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:40.986878   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:40.986888   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:40.986892   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:40.990180   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:41.487838   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:41.487861   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:41.487868   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:41.487872   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:41.491386   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:41.492362   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:41.987822   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:41.987844   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:41.987852   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:41.987856   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:41.991266   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:42.487015   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:42.487037   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:42.487045   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:42.487049   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:42.491106   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:17:42.986869   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:42.986888   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:42.986896   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:42.986901   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:42.990460   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:43.487128   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:43.487155   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:43.487166   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:43.487171   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:43.490615   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:43.987811   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:43.987833   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:43.987841   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:43.987845   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:43.991933   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:17:43.992489   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:44.487362   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:44.487384   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:44.487392   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:44.487396   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:44.490237   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:17:44.986868   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:44.986890   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:44.986898   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:44.986902   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:44.990865   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:45.487027   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:45.487057   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:45.487067   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:45.487072   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:45.490556   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:45.987818   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:45.987840   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:45.987850   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:45.987857   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:45.993195   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:45.993828   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:46.487019   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:46.487042   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:46.487054   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:46.487059   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:46.490270   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:46.987227   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:46.987251   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:46.987259   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:46.987263   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:46.990459   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:47.487319   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:47.487349   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:47.487358   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:47.487363   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:47.490818   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:47.986997   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:47.987019   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:47.987027   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:47.987033   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:47.990157   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:48.486848   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:48.486887   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:48.486898   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:48.486904   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:48.490233   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:48.490756   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:48.987333   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:48.987365   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:48.987373   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:48.987377   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:48.990768   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:49.487593   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:49.487616   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:49.487624   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:49.487628   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:49.490948   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:49.986845   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:49.986868   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:49.986876   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:49.986880   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:49.990586   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:50.487187   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:50.487210   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:50.487219   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:50.487226   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:50.491614   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:17:50.492061   29396 node_ready.go:53] node "ha-238496-m02" has status "Ready":"False"
	I0729 23:17:50.987524   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:50.987546   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:50.987554   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:50.987559   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:50.990568   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:17:51.487617   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:51.487650   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:51.487661   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:51.487666   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:51.491397   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:51.987402   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:51.987423   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:51.987431   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:51.987435   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:51.991650   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:17:52.487727   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:52.487753   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.487764   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.487770   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.491368   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:52.491966   29396 node_ready.go:49] node "ha-238496-m02" has status "Ready":"True"
	I0729 23:17:52.491985   29396 node_ready.go:38] duration metric: took 22.005334623s for node "ha-238496-m02" to be "Ready" ...
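
The 22-second wait recorded above is a poll loop: the client re-fetches the node object roughly every 500ms (visible in the timestamp cadence) until its Ready condition turns True. A minimal client-go sketch of that check, assuming a placeholder kubeconfig path and an interval inferred from the log rather than taken from minikube's source:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the run above talks to 192.168.39.113:8443.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-238496-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range n.Status.Conditions {
                    // A node counts as Ready when its NodeReady condition is True.
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible in the timestamps
        }
    }
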
	I0729 23:17:52.491993   29396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 23:17:52.492046   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:17:52.492054   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.492061   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.492064   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.504110   29396 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0729 23:17:52.510523   29396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8nps" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.510635   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p8nps
	I0729 23:17:52.510646   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.510656   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.510665   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.517610   29396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 23:17:52.518186   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:52.518201   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.518208   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.518211   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.526545   29396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 23:17:52.527004   29396 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8nps" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:52.527023   29396 pod_ready.go:81] duration metric: took 16.46732ms for pod "coredns-7db6d8ff4d-p8nps" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.527035   29396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tjplq" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.527102   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tjplq
	I0729 23:17:52.527112   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.527122   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.527129   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.532180   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:52.532980   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:52.532994   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.533003   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.533010   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.540147   29396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 23:17:52.540734   29396 pod_ready.go:92] pod "coredns-7db6d8ff4d-tjplq" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:52.540750   29396 pod_ready.go:81] duration metric: took 13.707118ms for pod "coredns-7db6d8ff4d-tjplq" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.540763   29396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.540850   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/etcd-ha-238496
	I0729 23:17:52.540860   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.540868   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.540875   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.546093   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:52.546851   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:52.546869   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.546880   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.546886   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.552117   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:52.552716   29396 pod_ready.go:92] pod "etcd-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:52.552732   29396 pod_ready.go:81] duration metric: took 11.96269ms for pod "etcd-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.552741   29396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.552797   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/etcd-ha-238496-m02
	I0729 23:17:52.552808   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.552818   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.552826   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.558334   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:52.559013   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:52.559032   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.559042   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.559047   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.561908   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:17:52.562357   29396 pod_ready.go:92] pod "etcd-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:52.562377   29396 pod_ready.go:81] duration metric: took 9.630355ms for pod "etcd-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.562391   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:52.688744   29396 request.go:629] Waited for 126.281844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496
	I0729 23:17:52.688810   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496
	I0729 23:17:52.688817   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.688828   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.688833   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.692265   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:52.888364   29396 request.go:629] Waited for 195.376381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:52.888439   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:52.888445   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:52.888452   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:52.888457   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:52.891026   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:17:52.891567   29396 pod_ready.go:92] pod "kube-apiserver-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:52.891584   29396 pod_ready.go:81] duration metric: took 329.186355ms for pod "kube-apiserver-ha-238496" in "kube-system" namespace to be "Ready" ...
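
The request.go:629 "client-side throttling" lines are client-go's token-bucket rate limiter, not apiserver priority and fairness: with rest.Config.QPS and Burst left at their defaults (5 and 10), back-to-back GETs queue for up to ~200ms each, which matches the waits logged here. A sketch of how a client raises those limits (the values are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; the ~126-196ms waits in the log fit the
        // 1-token-per-200ms refill rate. Raising both removes the pauses.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
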
	I0729 23:17:52.891593   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:53.087981   29396 request.go:629] Waited for 196.334679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496-m02
	I0729 23:17:53.088056   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496-m02
	I0729 23:17:53.088064   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:53.088072   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:53.088079   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:53.091485   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:53.288474   29396 request.go:629] Waited for 196.352331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:53.288520   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:53.288525   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:53.288532   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:53.288536   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:53.292168   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:53.292688   29396 pod_ready.go:92] pod "kube-apiserver-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:53.292709   29396 pod_ready.go:81] duration metric: took 401.108898ms for pod "kube-apiserver-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:53.292721   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:53.487802   29396 request.go:629] Waited for 195.017883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496
	I0729 23:17:53.487875   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496
	I0729 23:17:53.487880   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:53.487888   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:53.487896   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:53.491238   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:53.688436   29396 request.go:629] Waited for 196.401022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:53.688492   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:53.688509   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:53.688517   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:53.688521   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:53.691835   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:53.692301   29396 pod_ready.go:92] pod "kube-controller-manager-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:53.692325   29396 pod_ready.go:81] duration metric: took 399.593595ms for pod "kube-controller-manager-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:53.692339   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:53.888409   29396 request.go:629] Waited for 196.00174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496-m02
	I0729 23:17:53.888471   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496-m02
	I0729 23:17:53.888477   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:53.888484   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:53.888492   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:53.891692   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:54.088679   29396 request.go:629] Waited for 196.402331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:54.088739   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:54.088747   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:54.088758   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:54.088763   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:54.092410   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:54.093200   29396 pod_ready.go:92] pod "kube-controller-manager-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:54.093219   29396 pod_ready.go:81] duration metric: took 400.872004ms for pod "kube-controller-manager-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:54.093229   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6vdn" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:54.288425   29396 request.go:629] Waited for 195.10707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m6vdn
	I0729 23:17:54.288476   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m6vdn
	I0729 23:17:54.288482   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:54.288490   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:54.288493   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:54.292185   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:54.488193   29396 request.go:629] Waited for 195.421802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:54.488248   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:54.488253   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:54.488260   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:54.488265   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:54.491550   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:54.492131   29396 pod_ready.go:92] pod "kube-proxy-m6vdn" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:54.492151   29396 pod_ready.go:81] duration metric: took 398.917263ms for pod "kube-proxy-m6vdn" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:54.492160   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrvw6" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:54.688359   29396 request.go:629] Waited for 196.138725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrvw6
	I0729 23:17:54.688434   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrvw6
	I0729 23:17:54.688440   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:54.688447   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:54.688451   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:54.691559   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:54.888419   29396 request.go:629] Waited for 196.328075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:54.888482   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:54.888487   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:54.888494   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:54.888499   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:54.891566   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:54.892146   29396 pod_ready.go:92] pod "kube-proxy-nrvw6" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:54.892169   29396 pod_ready.go:81] duration metric: took 400.003082ms for pod "kube-proxy-nrvw6" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:54.892179   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:55.088353   29396 request.go:629] Waited for 196.106382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496
	I0729 23:17:55.088408   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496
	I0729 23:17:55.088413   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:55.088423   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:55.088429   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:55.091946   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:55.287817   29396 request.go:629] Waited for 195.275329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:55.287881   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:17:55.287904   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:55.287913   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:55.287918   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:55.291966   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:17:55.292571   29396 pod_ready.go:92] pod "kube-scheduler-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:55.292591   29396 pod_ready.go:81] duration metric: took 400.405235ms for pod "kube-scheduler-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:55.292601   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:55.488732   29396 request.go:629] Waited for 196.075875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496-m02
	I0729 23:17:55.488803   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496-m02
	I0729 23:17:55.488824   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:55.488835   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:55.488840   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:55.492759   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:55.687724   29396 request.go:629] Waited for 194.327667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:55.687806   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:17:55.687815   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:55.687828   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:55.687837   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:55.691900   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:17:55.692304   29396 pod_ready.go:92] pod "kube-scheduler-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:17:55.692319   29396 pod_ready.go:81] duration metric: took 399.71201ms for pod "kube-scheduler-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:17:55.692329   29396 pod_ready.go:38] duration metric: took 3.200327096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
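
Each pod_ready.go:92 line in the phase that just finished reduces to the same condition test on the fetched pod object. A standalone sketch of that predicate:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podReady mirrors the check behind the `has status "Ready":"True"` lines:
    // a pod counts as Ready when its PodReady condition reports True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
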
	I0729 23:17:55.692348   29396 api_server.go:52] waiting for apiserver process to appear ...
	I0729 23:17:55.692407   29396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 23:17:55.707796   29396 api_server.go:72] duration metric: took 25.548134132s to wait for apiserver process to appear ...
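
The pgrep probe above asks for the newest (-n) process whose full command line (-f) matches the pattern exactly (-x). Run locally instead of over minikube's SSH runner, the same check looks like this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means a matching kube-apiserver process exists.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Printf("apiserver pid: %s", out)
    }
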
	I0729 23:17:55.707815   29396 api_server.go:88] waiting for apiserver healthz status ...
	I0729 23:17:55.707828   29396 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8443/healthz ...
	I0729 23:17:55.714746   29396 api_server.go:279] https://192.168.39.113:8443/healthz returned 200:
	ok
	I0729 23:17:55.714807   29396 round_trippers.go:463] GET https://192.168.39.113:8443/version
	I0729 23:17:55.714813   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:55.714823   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:55.714831   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:55.716204   29396 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 23:17:55.716341   29396 api_server.go:141] control plane version: v1.30.3
	I0729 23:17:55.716364   29396 api_server.go:131] duration metric: took 8.542521ms to wait for apiserver health ...
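
The healthz and /version probes above are plain HTTPS GETs against the apiserver. A compact sketch of both, assuming the endpoint allows the request as this cluster did (InsecureSkipVerify keeps the sketch short; the real client trusts the cluster CA and presents client certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.39.113:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            // /healthz returns the literal body "ok" on success, as logged above.
            fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
        }
    }
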
	I0729 23:17:55.716374   29396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 23:17:55.888804   29396 request.go:629] Waited for 172.357533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:17:55.888877   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:17:55.888883   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:55.888891   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:55.888894   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:55.894267   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:55.898781   29396 system_pods.go:59] 17 kube-system pods found
	I0729 23:17:55.898806   29396 system_pods.go:61] "coredns-7db6d8ff4d-p8nps" [af3f5c7b-1996-497f-95f7-4bfc87392dc7] Running
	I0729 23:17:55.898811   29396 system_pods.go:61] "coredns-7db6d8ff4d-tjplq" [db7a6b8c-bfe3-4291-bf9a-9ce96bb5b0b7] Running
	I0729 23:17:55.898815   29396 system_pods.go:61] "etcd-ha-238496" [ed3a1237-a4c1-4e3f-b7d6-6b5237f7a18b] Running
	I0729 23:17:55.898819   29396 system_pods.go:61] "etcd-ha-238496-m02" [0a4d5ebc-a7be-445f-bdfc-47b3b1c01803] Running
	I0729 23:17:55.898822   29396 system_pods.go:61] "kindnet-55jmm" [7ddd1f82-1105-4694-b8d6-5198fdbd1f86] Running
	I0729 23:17:55.898827   29396 system_pods.go:61] "kindnet-xvzff" [400a9d4f-d218-443e-b001-edd5e5fd5af7] Running
	I0729 23:17:55.898830   29396 system_pods.go:61] "kube-apiserver-ha-238496" [54eebf95-2bd3-4c57-9794-170fccda1dbb] Running
	I0729 23:17:55.898834   29396 system_pods.go:61] "kube-apiserver-ha-238496-m02" [66429444-6c99-474c-9294-c569e1a5cc46] Running
	I0729 23:17:55.898838   29396 system_pods.go:61] "kube-controller-manager-ha-238496" [bb6bc2ad-54ec-42fa-8f18-e33cb50a8ce8] Running
	I0729 23:17:55.898842   29396 system_pods.go:61] "kube-controller-manager-ha-238496-m02" [8836c211-ee9d-403a-8383-333c22f1b945] Running
	I0729 23:17:55.898845   29396 system_pods.go:61] "kube-proxy-m6vdn" [f3731d91-d919-4f7f-a7b9-2bf7ba93569b] Running
	I0729 23:17:55.898848   29396 system_pods.go:61] "kube-proxy-nrvw6" [708cca57-5274-4ad9-871c-048f24b43a33] Running
	I0729 23:17:55.898851   29396 system_pods.go:61] "kube-scheduler-ha-238496" [b4999631-2ffc-4684-ab41-7e065cbbe74b] Running
	I0729 23:17:55.898857   29396 system_pods.go:61] "kube-scheduler-ha-238496-m02" [4eb7be71-6cad-4260-a4c0-6a97011e6ec5] Running
	I0729 23:17:55.898859   29396 system_pods.go:61] "kube-vip-ha-238496" [f248f380-c48b-451a-82e7-0aeb1e0ba6eb] Running
	I0729 23:17:55.898862   29396 system_pods.go:61] "kube-vip-ha-238496-m02" [39a50caf-f960-4d68-9235-d6dacace51c1] Running
	I0729 23:17:55.898865   29396 system_pods.go:61] "storage-provisioner" [2feba04d-7105-41cd-b308-747ed0079849] Running
	I0729 23:17:55.898871   29396 system_pods.go:74] duration metric: took 182.492062ms to wait for pod list to return data ...
	I0729 23:17:55.898879   29396 default_sa.go:34] waiting for default service account to be created ...
	I0729 23:17:56.088327   29396 request.go:629] Waited for 189.384723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/default/serviceaccounts
	I0729 23:17:56.088391   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/default/serviceaccounts
	I0729 23:17:56.088397   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:56.088405   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:56.088409   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:56.091993   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:56.092247   29396 default_sa.go:45] found service account: "default"
	I0729 23:17:56.092271   29396 default_sa.go:55] duration metric: took 193.384432ms for default service account to be created ...
	I0729 23:17:56.092281   29396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 23:17:56.288411   29396 request.go:629] Waited for 196.051331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:17:56.288503   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:17:56.288510   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:56.288519   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:56.288526   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:56.293596   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:17:56.298060   29396 system_pods.go:86] 17 kube-system pods found
	I0729 23:17:56.298088   29396 system_pods.go:89] "coredns-7db6d8ff4d-p8nps" [af3f5c7b-1996-497f-95f7-4bfc87392dc7] Running
	I0729 23:17:56.298095   29396 system_pods.go:89] "coredns-7db6d8ff4d-tjplq" [db7a6b8c-bfe3-4291-bf9a-9ce96bb5b0b7] Running
	I0729 23:17:56.298100   29396 system_pods.go:89] "etcd-ha-238496" [ed3a1237-a4c1-4e3f-b7d6-6b5237f7a18b] Running
	I0729 23:17:56.298104   29396 system_pods.go:89] "etcd-ha-238496-m02" [0a4d5ebc-a7be-445f-bdfc-47b3b1c01803] Running
	I0729 23:17:56.298110   29396 system_pods.go:89] "kindnet-55jmm" [7ddd1f82-1105-4694-b8d6-5198fdbd1f86] Running
	I0729 23:17:56.298114   29396 system_pods.go:89] "kindnet-xvzff" [400a9d4f-d218-443e-b001-edd5e5fd5af7] Running
	I0729 23:17:56.298120   29396 system_pods.go:89] "kube-apiserver-ha-238496" [54eebf95-2bd3-4c57-9794-170fccda1dbb] Running
	I0729 23:17:56.298126   29396 system_pods.go:89] "kube-apiserver-ha-238496-m02" [66429444-6c99-474c-9294-c569e1a5cc46] Running
	I0729 23:17:56.298132   29396 system_pods.go:89] "kube-controller-manager-ha-238496" [bb6bc2ad-54ec-42fa-8f18-e33cb50a8ce8] Running
	I0729 23:17:56.298140   29396 system_pods.go:89] "kube-controller-manager-ha-238496-m02" [8836c211-ee9d-403a-8383-333c22f1b945] Running
	I0729 23:17:56.298150   29396 system_pods.go:89] "kube-proxy-m6vdn" [f3731d91-d919-4f7f-a7b9-2bf7ba93569b] Running
	I0729 23:17:56.298156   29396 system_pods.go:89] "kube-proxy-nrvw6" [708cca57-5274-4ad9-871c-048f24b43a33] Running
	I0729 23:17:56.298162   29396 system_pods.go:89] "kube-scheduler-ha-238496" [b4999631-2ffc-4684-ab41-7e065cbbe74b] Running
	I0729 23:17:56.298172   29396 system_pods.go:89] "kube-scheduler-ha-238496-m02" [4eb7be71-6cad-4260-a4c0-6a97011e6ec5] Running
	I0729 23:17:56.298178   29396 system_pods.go:89] "kube-vip-ha-238496" [f248f380-c48b-451a-82e7-0aeb1e0ba6eb] Running
	I0729 23:17:56.298182   29396 system_pods.go:89] "kube-vip-ha-238496-m02" [39a50caf-f960-4d68-9235-d6dacace51c1] Running
	I0729 23:17:56.298186   29396 system_pods.go:89] "storage-provisioner" [2feba04d-7105-41cd-b308-747ed0079849] Running
	I0729 23:17:56.298193   29396 system_pods.go:126] duration metric: took 205.904881ms to wait for k8s-apps to be running ...
	I0729 23:17:56.298204   29396 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 23:17:56.298252   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:17:56.313300   29396 system_svc.go:56] duration metric: took 15.089221ms WaitForService to wait for kubelet
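
The kubelet service check leans entirely on the exit code of `systemctl is-active --quiet`. Reduced to a local sketch with the same arguments the log shows (minikube issues the command through its SSH runner on the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output, so the exit code alone carries the answer:
        // 0 means active, non-zero means inactive, failed, or unknown.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
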
	I0729 23:17:56.313325   29396 kubeadm.go:582] duration metric: took 26.153665426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 23:17:56.313348   29396 node_conditions.go:102] verifying NodePressure condition ...
	I0729 23:17:56.487722   29396 request.go:629] Waited for 174.298793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes
	I0729 23:17:56.487799   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes
	I0729 23:17:56.487809   29396 round_trippers.go:469] Request Headers:
	I0729 23:17:56.487820   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:17:56.487828   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:17:56.491492   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:17:56.492513   29396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 23:17:56.492537   29396 node_conditions.go:123] node cpu capacity is 2
	I0729 23:17:56.492550   29396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 23:17:56.492555   29396 node_conditions.go:123] node cpu capacity is 2
	I0729 23:17:56.492561   29396 node_conditions.go:105] duration metric: took 179.206939ms to run NodePressure ...
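
The NodePressure pass reads every node's capacity out of the single GET /api/v1/nodes above. A client-go sketch that extracts the two figures the log reports per node (cpu and ephemeral storage), with a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Matches the log: cpu capacity 2 and ephemeral storage 17734596Ki per node.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
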
	I0729 23:17:56.492573   29396 start.go:241] waiting for startup goroutines ...
	I0729 23:17:56.492604   29396 start.go:255] writing updated cluster config ...
	I0729 23:17:56.494762   29396 out.go:177] 
	I0729 23:17:56.496382   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:17:56.496476   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:17:56.498380   29396 out.go:177] * Starting "ha-238496-m03" control-plane node in "ha-238496" cluster
	I0729 23:17:56.499671   29396 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 23:17:56.499697   29396 cache.go:56] Caching tarball of preloaded images
	I0729 23:17:56.499810   29396 preload.go:172] Found /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 23:17:56.499827   29396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 23:17:56.499918   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:17:56.500102   29396 start.go:360] acquireMachinesLock for ha-238496-m03: {Name:mk79fbc287386032c39e512567e9786663e657a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 23:17:56.500164   29396 start.go:364] duration metric: took 35.548µs to acquireMachinesLock for "ha-238496-m03"
	I0729 23:17:56.500188   29396 start.go:93] Provisioning new machine with config: &{Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:17:56.500322   29396 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 23:17:56.501844   29396 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 23:17:56.501937   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:17:56.501974   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:17:56.516619   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I0729 23:17:56.517059   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:17:56.517528   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:17:56.517548   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:17:56.517956   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:17:56.518148   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetMachineName
	I0729 23:17:56.518310   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:17:56.518469   29396 start.go:159] libmachine.API.Create for "ha-238496" (driver="kvm2")
	I0729 23:17:56.518499   29396 client.go:168] LocalClient.Create starting
	I0729 23:17:56.518534   29396 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem
	I0729 23:17:56.518573   29396 main.go:141] libmachine: Decoding PEM data...
	I0729 23:17:56.518591   29396 main.go:141] libmachine: Parsing certificate...
	I0729 23:17:56.518657   29396 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem
	I0729 23:17:56.518681   29396 main.go:141] libmachine: Decoding PEM data...
	I0729 23:17:56.518711   29396 main.go:141] libmachine: Parsing certificate...
	I0729 23:17:56.518737   29396 main.go:141] libmachine: Running pre-create checks...
	I0729 23:17:56.518748   29396 main.go:141] libmachine: (ha-238496-m03) Calling .PreCreateCheck
	I0729 23:17:56.518918   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetConfigRaw
	I0729 23:17:56.519351   29396 main.go:141] libmachine: Creating machine...
	I0729 23:17:56.519366   29396 main.go:141] libmachine: (ha-238496-m03) Calling .Create
	I0729 23:17:56.519523   29396 main.go:141] libmachine: (ha-238496-m03) Creating KVM machine...
	I0729 23:17:56.520838   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found existing default KVM network
	I0729 23:17:56.521062   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found existing private KVM network mk-ha-238496
	I0729 23:17:56.521175   29396 main.go:141] libmachine: (ha-238496-m03) Setting up store path in /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03 ...
	I0729 23:17:56.521192   29396 main.go:141] libmachine: (ha-238496-m03) Building disk image from file:///home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 23:17:56.521250   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:56.521166   30320 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:17:56.521335   29396 main.go:141] libmachine: (ha-238496-m03) Downloading /home/jenkins/minikube-integration/19347-12221/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 23:17:56.760034   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:56.759892   30320 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa...
	I0729 23:17:56.853038   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:56.852885   30320 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/ha-238496-m03.rawdisk...
	I0729 23:17:56.853082   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Writing magic tar header
	I0729 23:17:56.853127   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Writing SSH key tar header
	I0729 23:17:56.853159   29396 main.go:141] libmachine: (ha-238496-m03) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03 (perms=drwx------)
	I0729 23:17:56.853175   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:56.852996   30320 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03 ...
	I0729 23:17:56.853200   29396 main.go:141] libmachine: (ha-238496-m03) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube/machines (perms=drwxr-xr-x)
	I0729 23:17:56.853225   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03
	I0729 23:17:56.853237   29396 main.go:141] libmachine: (ha-238496-m03) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221/.minikube (perms=drwxr-xr-x)
	I0729 23:17:56.853253   29396 main.go:141] libmachine: (ha-238496-m03) Setting executable bit set on /home/jenkins/minikube-integration/19347-12221 (perms=drwxrwxr-x)
	I0729 23:17:56.853291   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube/machines
	I0729 23:17:56.853302   29396 main.go:141] libmachine: (ha-238496-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 23:17:56.853313   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:17:56.853338   29396 main.go:141] libmachine: (ha-238496-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 23:17:56.853354   29396 main.go:141] libmachine: (ha-238496-m03) Creating domain...
	I0729 23:17:56.853368   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19347-12221
	I0729 23:17:56.853389   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 23:17:56.853401   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 23:17:56.853413   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Checking permissions on dir: /home
	I0729 23:17:56.853421   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Skipping /home - not owner
	I0729 23:17:56.854341   29396 main.go:141] libmachine: (ha-238496-m03) define libvirt domain using xml: 
	I0729 23:17:56.854361   29396 main.go:141] libmachine: (ha-238496-m03) <domain type='kvm'>
	I0729 23:17:56.854378   29396 main.go:141] libmachine: (ha-238496-m03)   <name>ha-238496-m03</name>
	I0729 23:17:56.854388   29396 main.go:141] libmachine: (ha-238496-m03)   <memory unit='MiB'>2200</memory>
	I0729 23:17:56.854399   29396 main.go:141] libmachine: (ha-238496-m03)   <vcpu>2</vcpu>
	I0729 23:17:56.854408   29396 main.go:141] libmachine: (ha-238496-m03)   <features>
	I0729 23:17:56.854419   29396 main.go:141] libmachine: (ha-238496-m03)     <acpi/>
	I0729 23:17:56.854429   29396 main.go:141] libmachine: (ha-238496-m03)     <apic/>
	I0729 23:17:56.854438   29396 main.go:141] libmachine: (ha-238496-m03)     <pae/>
	I0729 23:17:56.854449   29396 main.go:141] libmachine: (ha-238496-m03)     
	I0729 23:17:56.854459   29396 main.go:141] libmachine: (ha-238496-m03)   </features>
	I0729 23:17:56.854474   29396 main.go:141] libmachine: (ha-238496-m03)   <cpu mode='host-passthrough'>
	I0729 23:17:56.854483   29396 main.go:141] libmachine: (ha-238496-m03)   
	I0729 23:17:56.854489   29396 main.go:141] libmachine: (ha-238496-m03)   </cpu>
	I0729 23:17:56.854500   29396 main.go:141] libmachine: (ha-238496-m03)   <os>
	I0729 23:17:56.854510   29396 main.go:141] libmachine: (ha-238496-m03)     <type>hvm</type>
	I0729 23:17:56.854520   29396 main.go:141] libmachine: (ha-238496-m03)     <boot dev='cdrom'/>
	I0729 23:17:56.854534   29396 main.go:141] libmachine: (ha-238496-m03)     <boot dev='hd'/>
	I0729 23:17:56.854563   29396 main.go:141] libmachine: (ha-238496-m03)     <bootmenu enable='no'/>
	I0729 23:17:56.854583   29396 main.go:141] libmachine: (ha-238496-m03)   </os>
	I0729 23:17:56.854607   29396 main.go:141] libmachine: (ha-238496-m03)   <devices>
	I0729 23:17:56.854626   29396 main.go:141] libmachine: (ha-238496-m03)     <disk type='file' device='cdrom'>
	I0729 23:17:56.854667   29396 main.go:141] libmachine: (ha-238496-m03)       <source file='/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/boot2docker.iso'/>
	I0729 23:17:56.854682   29396 main.go:141] libmachine: (ha-238496-m03)       <target dev='hdc' bus='scsi'/>
	I0729 23:17:56.854710   29396 main.go:141] libmachine: (ha-238496-m03)       <readonly/>
	I0729 23:17:56.854725   29396 main.go:141] libmachine: (ha-238496-m03)     </disk>
	I0729 23:17:56.854737   29396 main.go:141] libmachine: (ha-238496-m03)     <disk type='file' device='disk'>
	I0729 23:17:56.854747   29396 main.go:141] libmachine: (ha-238496-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 23:17:56.854759   29396 main.go:141] libmachine: (ha-238496-m03)       <source file='/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/ha-238496-m03.rawdisk'/>
	I0729 23:17:56.854766   29396 main.go:141] libmachine: (ha-238496-m03)       <target dev='hda' bus='virtio'/>
	I0729 23:17:56.854772   29396 main.go:141] libmachine: (ha-238496-m03)     </disk>
	I0729 23:17:56.854781   29396 main.go:141] libmachine: (ha-238496-m03)     <interface type='network'>
	I0729 23:17:56.854801   29396 main.go:141] libmachine: (ha-238496-m03)       <source network='mk-ha-238496'/>
	I0729 23:17:56.854815   29396 main.go:141] libmachine: (ha-238496-m03)       <model type='virtio'/>
	I0729 23:17:56.854824   29396 main.go:141] libmachine: (ha-238496-m03)     </interface>
	I0729 23:17:56.854830   29396 main.go:141] libmachine: (ha-238496-m03)     <interface type='network'>
	I0729 23:17:56.854838   29396 main.go:141] libmachine: (ha-238496-m03)       <source network='default'/>
	I0729 23:17:56.854845   29396 main.go:141] libmachine: (ha-238496-m03)       <model type='virtio'/>
	I0729 23:17:56.854850   29396 main.go:141] libmachine: (ha-238496-m03)     </interface>
	I0729 23:17:56.854860   29396 main.go:141] libmachine: (ha-238496-m03)     <serial type='pty'>
	I0729 23:17:56.854869   29396 main.go:141] libmachine: (ha-238496-m03)       <target port='0'/>
	I0729 23:17:56.854876   29396 main.go:141] libmachine: (ha-238496-m03)     </serial>
	I0729 23:17:56.854881   29396 main.go:141] libmachine: (ha-238496-m03)     <console type='pty'>
	I0729 23:17:56.854888   29396 main.go:141] libmachine: (ha-238496-m03)       <target type='serial' port='0'/>
	I0729 23:17:56.854903   29396 main.go:141] libmachine: (ha-238496-m03)     </console>
	I0729 23:17:56.854919   29396 main.go:141] libmachine: (ha-238496-m03)     <rng model='virtio'>
	I0729 23:17:56.854934   29396 main.go:141] libmachine: (ha-238496-m03)       <backend model='random'>/dev/random</backend>
	I0729 23:17:56.854943   29396 main.go:141] libmachine: (ha-238496-m03)     </rng>
	I0729 23:17:56.854951   29396 main.go:141] libmachine: (ha-238496-m03)     
	I0729 23:17:56.854960   29396 main.go:141] libmachine: (ha-238496-m03)     
	I0729 23:17:56.854969   29396 main.go:141] libmachine: (ha-238496-m03)   </devices>
	I0729 23:17:56.854995   29396 main.go:141] libmachine: (ha-238496-m03) </domain>
	I0729 23:17:56.855005   29396 main.go:141] libmachine: (ha-238496-m03) 
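
Once the XML above is assembled, defining and booting the guest is two libvirt calls. A sketch using the github.com/libvirt/libvirt-go bindings, which are an assumption for illustration (minikube actually drives libvirt through its out-of-process kvm2 driver plugin, per the "Launching plugin server" lines earlier):

    package vmcreate

    import (
        "log"

        libvirt "github.com/libvirt/libvirt-go"
    )

    // defineAndStart persists a domain definition and boots it: the two steps
    // the log labels "define libvirt domain using xml" and "Creating domain...".
    func defineAndStart(domainXML string) {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // persist the definition
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // start the defined domain
            log.Fatal(err)
        }
    }
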
	I0729 23:17:56.862233   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:cd:ef:f8 in network default
	I0729 23:17:56.862925   29396 main.go:141] libmachine: (ha-238496-m03) Ensuring networks are active...
	I0729 23:17:56.862951   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:17:56.863726   29396 main.go:141] libmachine: (ha-238496-m03) Ensuring network default is active
	I0729 23:17:56.864012   29396 main.go:141] libmachine: (ha-238496-m03) Ensuring network mk-ha-238496 is active
	I0729 23:17:56.864366   29396 main.go:141] libmachine: (ha-238496-m03) Getting domain xml...
	I0729 23:17:56.865033   29396 main.go:141] libmachine: (ha-238496-m03) Creating domain...
	I0729 23:17:58.116970   29396 main.go:141] libmachine: (ha-238496-m03) Waiting to get IP...
	I0729 23:17:58.117886   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:17:58.118260   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:17:58.118300   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:58.118258   30320 retry.go:31] will retry after 268.360451ms: waiting for machine to come up
	I0729 23:17:58.388875   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:17:58.389411   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:17:58.389435   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:58.389365   30320 retry.go:31] will retry after 348.144746ms: waiting for machine to come up
	I0729 23:17:58.738773   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:17:58.739331   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:17:58.739360   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:58.739272   30320 retry.go:31] will retry after 389.25045ms: waiting for machine to come up
	I0729 23:17:59.129833   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:17:59.130375   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:17:59.130401   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:59.130330   30320 retry.go:31] will retry after 474.496502ms: waiting for machine to come up
	I0729 23:17:59.605919   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:17:59.606486   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:17:59.606509   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:17:59.606424   30320 retry.go:31] will retry after 613.279938ms: waiting for machine to come up
	I0729 23:18:00.221389   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:00.221827   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:00.221851   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:00.221772   30320 retry.go:31] will retry after 600.582506ms: waiting for machine to come up
	I0729 23:18:00.823549   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:00.823945   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:00.823968   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:00.823898   30320 retry.go:31] will retry after 923.091946ms: waiting for machine to come up
	I0729 23:18:01.748465   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:01.748887   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:01.748914   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:01.748870   30320 retry.go:31] will retry after 1.165300062s: waiting for machine to come up
	I0729 23:18:02.915681   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:02.916182   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:02.916208   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:02.916141   30320 retry.go:31] will retry after 1.444012725s: waiting for machine to come up
	I0729 23:18:04.361249   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:04.361757   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:04.361779   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:04.361726   30320 retry.go:31] will retry after 2.185830021s: waiting for machine to come up
	I0729 23:18:06.548999   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:06.549480   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:06.549506   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:06.549438   30320 retry.go:31] will retry after 2.601246738s: waiting for machine to come up
	I0729 23:18:09.154097   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:09.154517   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:09.154538   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:09.154477   30320 retry.go:31] will retry after 3.18346416s: waiting for machine to come up
	I0729 23:18:12.339329   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:12.339790   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find current IP address of domain ha-238496-m03 in network mk-ha-238496
	I0729 23:18:12.339810   29396 main.go:141] libmachine: (ha-238496-m03) DBG | I0729 23:18:12.339749   30320 retry.go:31] will retry after 4.409983716s: waiting for machine to come up
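The waits logged by retry.go above (268ms, 348ms, ..., 4.4s) grow roughly exponentially with jitter, which is why 600ms can follow 923ms's predecessor out of strict order. A minimal sketch of the same pattern; the attempt count, base delay, and jitter factor here are illustrative, not minikube's exact parameters:

// Sketch: retry with jittered exponential backoff, mirroring the
// "will retry after ..." waits above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	wait := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// Add up to ~50% jitter so concurrent waiters do not synchronize.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	start := time.Now()
	err := retryWithBackoff(6, 250*time.Millisecond, func() error {
		if time.Since(start) < 2*time.Second {
			return errors.New("no IP yet") // stands in for "unable to find current IP address"
		}
		return nil
	})
	fmt.Println(err)
}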
	I0729 23:18:16.750726   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:16.751193   29396 main.go:141] libmachine: (ha-238496-m03) Found IP for machine: 192.168.39.149
	I0729 23:18:16.751226   29396 main.go:141] libmachine: (ha-238496-m03) Reserving static IP address...
	I0729 23:18:16.751240   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:16.751590   29396 main.go:141] libmachine: (ha-238496-m03) DBG | unable to find host DHCP lease matching {name: "ha-238496-m03", mac: "52:54:00:34:73:00", ip: "192.168.39.149"} in network mk-ha-238496
	I0729 23:18:16.821654   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Getting to WaitForSSH function...
	I0729 23:18:16.821681   29396 main.go:141] libmachine: (ha-238496-m03) Reserved static IP address: 192.168.39.149
	I0729 23:18:16.821693   29396 main.go:141] libmachine: (ha-238496-m03) Waiting for SSH to be available...
	I0729 23:18:16.824621   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:16.825058   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:73:00}
	I0729 23:18:16.825096   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:16.825253   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Using SSH client type: external
	I0729 23:18:16.825282   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa (-rw-------)
	I0729 23:18:16.825312   29396 main.go:141] libmachine: (ha-238496-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 23:18:16.825323   29396 main.go:141] libmachine: (ha-238496-m03) DBG | About to run SSH command:
	I0729 23:18:16.825338   29396 main.go:141] libmachine: (ha-238496-m03) DBG | exit 0
	I0729 23:18:16.946616   29396 main.go:141] libmachine: (ha-238496-m03) DBG | SSH cmd err, output: <nil>: 
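The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs `exit 0`; a zero exit status means sshd inside the guest is accepting connections. A sketch of the same probe (the host and key path are examples):

// Sketch: probe SSH readiness by running `exit 0` through the external
// ssh client, using the same kind of options logged above.
package main

import (
	"log"
	"os/exec"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // exit status 0: the SSH daemon is up
}

func main() {
	if sshReady("192.168.39.149", "/path/to/id_rsa") {
		log.Println("SSH is available")
	}
}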
	I0729 23:18:16.946922   29396 main.go:141] libmachine: (ha-238496-m03) KVM machine creation complete!
	I0729 23:18:16.947196   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetConfigRaw
	I0729 23:18:16.947801   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:16.947974   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:16.948102   29396 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 23:18:16.948118   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetState
	I0729 23:18:16.949513   29396 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 23:18:16.949532   29396 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 23:18:16.949537   29396 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 23:18:16.949543   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:16.951794   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:16.952180   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:16.952208   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:16.952339   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:16.952505   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:16.952651   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:16.952805   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:16.952961   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:16.953205   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:16.953221   29396 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 23:18:17.054089   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 23:18:17.054114   29396 main.go:141] libmachine: Detecting the provisioner...
	I0729 23:18:17.054126   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.057286   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.057884   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.057914   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.058180   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.058466   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.058723   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.058900   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.059130   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:17.059333   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:17.059349   29396 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 23:18:17.167547   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 23:18:17.167618   29396 main.go:141] libmachine: found compatible host: buildroot
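Provisioner detection amounts to `cat /etc/os-release` and matching the ID field; ID=buildroot above selects the buildroot provisioner. A sketch of the parsing side, simplified to the ID key only:

// Sketch: read the ID field from an os-release file, as the detection
// step above does with the output of `cat /etc/os-release`.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`), nil // e.g. "buildroot"
		}
	}
	return "", sc.Err() // not found (or a read error)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	fmt.Println(id, err)
}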
	I0729 23:18:17.167642   29396 main.go:141] libmachine: Provisioning with buildroot...
	I0729 23:18:17.167653   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetMachineName
	I0729 23:18:17.167880   29396 buildroot.go:166] provisioning hostname "ha-238496-m03"
	I0729 23:18:17.167907   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetMachineName
	I0729 23:18:17.168122   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.170867   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.171312   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.171345   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.171499   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.171697   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.171868   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.172033   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.172229   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:17.172456   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:17.172482   29396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-238496-m03 && echo "ha-238496-m03" | sudo tee /etc/hostname
	I0729 23:18:17.290998   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-238496-m03
	
	I0729 23:18:17.291025   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.294658   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.295379   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.295410   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.295626   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.295831   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.296004   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.296193   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.296419   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:17.296659   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:17.296681   29396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-238496-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-238496-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-238496-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 23:18:17.412566   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
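The shell above keeps /etc/hosts consistent with the new hostname, and it is idempotent: if no line already ends in ha-238496-m03, an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, so a rerun changes nothing. The same edit sketched locally in Go (the path and hostname are examples):

// Sketch: idempotent /etc/hosts hostname mapping, mirroring the
// grep/sed/tee logic run over SSH above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	// Already mapped? (mirrors the grep '.*\s<name>' check)
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
		return nil
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), entry) // rewrite existing line
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n" // append
	}
	return os.WriteFile(hostsPath, []byte(out), 0644)
}

func main() {
	fmt.Println(ensureHostname("/etc/hosts", "ha-238496-m03"))
}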
	I0729 23:18:17.412594   29396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19347-12221/.minikube CaCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19347-12221/.minikube}
	I0729 23:18:17.412611   29396 buildroot.go:174] setting up certificates
	I0729 23:18:17.412623   29396 provision.go:84] configureAuth start
	I0729 23:18:17.412631   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetMachineName
	I0729 23:18:17.412886   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetIP
	I0729 23:18:17.415634   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.415982   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.416009   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.416139   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.418291   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.418718   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.418745   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.418887   29396 provision.go:143] copyHostCerts
	I0729 23:18:17.418920   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem
	I0729 23:18:17.418960   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem, removing ...
	I0729 23:18:17.418971   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem
	I0729 23:18:17.419054   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/ca.pem (1078 bytes)
	I0729 23:18:17.419146   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem
	I0729 23:18:17.419169   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem, removing ...
	I0729 23:18:17.419178   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem
	I0729 23:18:17.419221   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/cert.pem (1123 bytes)
	I0729 23:18:17.419284   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem
	I0729 23:18:17.419307   29396 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem, removing ...
	I0729 23:18:17.419316   29396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem
	I0729 23:18:17.419353   29396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19347-12221/.minikube/key.pem (1675 bytes)
	I0729 23:18:17.419477   29396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem org=jenkins.ha-238496-m03 san=[127.0.0.1 192.168.39.149 ha-238496-m03 localhost minikube]
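The server cert generated above is signed by the shared minikube CA and carries every name the machine answers to as a SAN (loopback, the node IP 192.168.39.149, the hostname, localhost, minikube), so one cert serves all of those addresses. A runnable sketch of issuing such a certificate with crypto/x509; the key size, validity window, and subject strings are illustrative:

// Sketch: issue a server certificate with IP and DNS SANs, signed by a
// CA key pair, as the "generating server cert" step above does.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// A throwaway CA standing in for .minikube/certs/ca.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert with the SAN set seen in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-238496-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")},
		DNSNames:     []string{"ha-238496-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("issued server.pem, %d DER bytes\n", len(der))
}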
	I0729 23:18:17.499973   29396 provision.go:177] copyRemoteCerts
	I0729 23:18:17.500084   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 23:18:17.500119   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.502839   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.503240   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.503280   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.503394   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.503579   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.503727   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.503848   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa Username:docker}
	I0729 23:18:17.585149   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 23:18:17.585226   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 23:18:17.611065   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 23:18:17.611132   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 23:18:17.636605   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 23:18:17.636684   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 23:18:17.663333   29396 provision.go:87] duration metric: took 250.697302ms to configureAuth
	I0729 23:18:17.663364   29396 buildroot.go:189] setting minikube options for container-runtime
	I0729 23:18:17.663658   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:18:17.663690   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:17.664032   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.666831   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.667201   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.667236   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.667362   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.667568   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.667730   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.667864   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.668006   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:17.668186   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:17.668197   29396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 23:18:17.772931   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 23:18:17.772958   29396 buildroot.go:70] root file system type: tmpfs
	I0729 23:18:17.773073   29396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 23:18:17.773094   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.775780   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.776131   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.776168   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.776334   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.776561   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.776761   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.776903   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.777084   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:17.777237   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:17.777296   29396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.113"
	Environment="NO_PROXY=192.168.39.113,192.168.39.226"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 23:18:17.894853   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.113
	Environment=NO_PROXY=192.168.39.113,192.168.39.226
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 23:18:17.894900   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:17.897632   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.898057   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:17.898091   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:17.898268   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:17.898549   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.898768   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:17.898941   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:17.899117   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:17.899270   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:17.899286   29396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 23:18:19.710533   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
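The diff-then-replace command above makes the unit update idempotent: docker is only re-enabled and restarted when the rendered docker.service differs from what is installed. diff exits nonzero both when the files differ and when the target is missing, so first-time provisioning (the "can't stat" output here) takes the install branch and creates the symlink. The command shape, sketched:

// Sketch: compose the update-only-if-changed unit install command used
// above, so an unchanged unit file skips the docker restart.
package main

import "fmt"

func updateUnitCmd(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		path)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}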
	
	I0729 23:18:19.710562   29396 main.go:141] libmachine: Checking connection to Docker...
	I0729 23:18:19.710570   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetURL
	I0729 23:18:19.712009   29396 main.go:141] libmachine: (ha-238496-m03) DBG | Using libvirt version 6000000
	I0729 23:18:19.714322   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.714682   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.714714   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.714913   29396 main.go:141] libmachine: Docker is up and running!
	I0729 23:18:19.714930   29396 main.go:141] libmachine: Reticulating splines...
	I0729 23:18:19.714938   29396 client.go:171] duration metric: took 23.196430062s to LocalClient.Create
	I0729 23:18:19.714965   29396 start.go:167] duration metric: took 23.196496052s to libmachine.API.Create "ha-238496"
	I0729 23:18:19.714977   29396 start.go:293] postStartSetup for "ha-238496-m03" (driver="kvm2")
	I0729 23:18:19.714992   29396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 23:18:19.715025   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:19.715294   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 23:18:19.715321   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:19.717345   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.717724   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.717752   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.717859   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:19.718034   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:19.718204   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:19.718352   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa Username:docker}
	I0729 23:18:19.801578   29396 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 23:18:19.806033   29396 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 23:18:19.806058   29396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-12221/.minikube/addons for local assets ...
	I0729 23:18:19.806117   29396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-12221/.minikube/files for local assets ...
	I0729 23:18:19.806183   29396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> 194112.pem in /etc/ssl/certs
	I0729 23:18:19.806192   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> /etc/ssl/certs/194112.pem
	I0729 23:18:19.806266   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 23:18:19.815398   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem --> /etc/ssl/certs/194112.pem (1708 bytes)
	I0729 23:18:19.841583   29396 start.go:296] duration metric: took 126.592964ms for postStartSetup
	I0729 23:18:19.841628   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetConfigRaw
	I0729 23:18:19.842233   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetIP
	I0729 23:18:19.844912   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.845270   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.845300   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.845620   29396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/config.json ...
	I0729 23:18:19.845824   29396 start.go:128] duration metric: took 23.345491779s to createHost
	I0729 23:18:19.845846   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:19.848061   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.848372   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.848398   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.848564   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:19.848749   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:19.848909   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:19.849065   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:19.849191   29396 main.go:141] libmachine: Using SSH client type: native
	I0729 23:18:19.849338   29396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0729 23:18:19.849348   29396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 23:18:19.951990   29396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722295099.917890654
	
	I0729 23:18:19.952014   29396 fix.go:216] guest clock: 1722295099.917890654
	I0729 23:18:19.952023   29396 fix.go:229] Guest: 2024-07-29 23:18:19.917890654 +0000 UTC Remote: 2024-07-29 23:18:19.84583587 +0000 UTC m=+183.867783749 (delta=72.054784ms)
	I0729 23:18:19.952043   29396 fix.go:200] guest clock delta is within tolerance: 72.054784ms
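The clock check runs `date +%s.%N` on the guest, compares the result with the host clock, and accepts any delta within tolerance without adjusting the guest (72ms here). Parsing and comparing, sketched; the sketch ignores SSH round-trip latency and the delta's sign, since only its magnitude matters for the tolerance check:

// Sketch: parse `date +%s.%N` output and compute the guest clock delta.
// Assumes a 9-digit nanosecond field, as %N prints.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestDelta(output string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return time.Since(guest), nil
}

func main() {
	d, _ := guestDelta("1722295099.917890654") // the guest clock logged above
	fmt.Printf("guest clock delta: %v\n", d)
}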
	I0729 23:18:19.952048   29396 start.go:83] releasing machines lock for "ha-238496-m03", held for 23.451873031s
	I0729 23:18:19.952066   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:19.952316   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetIP
	I0729 23:18:19.955220   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.955681   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.955705   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.958083   29396 out.go:177] * Found network options:
	I0729 23:18:19.959273   29396 out.go:177]   - NO_PROXY=192.168.39.113,192.168.39.226
	W0729 23:18:19.960362   29396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 23:18:19.960382   29396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 23:18:19.960394   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:19.960914   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:19.961090   29396 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:18:19.961186   29396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 23:18:19.961222   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	W0729 23:18:19.961278   29396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 23:18:19.961299   29396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 23:18:19.961355   29396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 23:18:19.961370   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:18:19.964054   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.964314   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.964345   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.964367   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.964513   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:19.964682   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:19.964779   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:19.964808   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:19.964835   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:19.964951   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:18:19.965021   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa Username:docker}
	I0729 23:18:19.965091   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:18:19.965232   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:18:19.965377   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa Username:docker}
	W0729 23:18:20.069265   29396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 23:18:20.069346   29396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 23:18:20.089100   29396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 23:18:20.089127   29396 start.go:495] detecting cgroup driver to use...
	I0729 23:18:20.089226   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 23:18:20.109423   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 23:18:20.121185   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 23:18:20.131667   29396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 23:18:20.131729   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 23:18:20.142107   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 23:18:20.152727   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 23:18:20.162722   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 23:18:20.173052   29396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 23:18:20.183164   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 23:18:20.193942   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 23:18:20.204204   29396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 23:18:20.216044   29396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 23:18:20.225885   29396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 23:18:20.235268   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:18:20.354306   29396 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 23:18:20.386442   29396 start.go:495] detecting cgroup driver to use...
	I0729 23:18:20.386526   29396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 23:18:20.403972   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 23:18:20.436001   29396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 23:18:20.467406   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 23:18:20.484807   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 23:18:20.506289   29396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 23:18:20.539471   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 23:18:20.553836   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 23:18:20.573444   29396 ssh_runner.go:195] Run: which cri-dockerd
	I0729 23:18:20.578089   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 23:18:20.588175   29396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 23:18:20.607230   29396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 23:18:20.725875   29396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 23:18:20.855232   29396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 23:18:20.855280   29396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 23:18:20.874090   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:18:20.994222   29396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 23:18:23.368531   29396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.374269914s)
	I0729 23:18:23.368592   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 23:18:23.383584   29396 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 23:18:23.403170   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 23:18:23.418207   29396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 23:18:23.541217   29396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 23:18:23.670819   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:18:23.810131   29396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 23:18:23.827814   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 23:18:23.842063   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:18:23.956037   29396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 23:18:24.041700   29396 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 23:18:24.041767   29396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 23:18:24.048237   29396 start.go:563] Will wait 60s for crictl version
	I0729 23:18:24.048285   29396 ssh_runner.go:195] Run: which crictl
	I0729 23:18:24.053205   29396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 23:18:24.096189   29396 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0729 23:18:24.096276   29396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 23:18:24.127515   29396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
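Both runtime version probes above go through `docker version --format {{.Server.Version}}`, which prints just the daemon version string. Sketched with os/exec:

// Sketch: query the Docker daemon version the way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dockerServerVersion() (string, error) {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	v, err := dockerServerVersion()
	fmt.Println(v, err) // e.g. "27.1.0" in the run above
}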
	I0729 23:18:24.155063   29396 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0729 23:18:24.156415   29396 out.go:177]   - env NO_PROXY=192.168.39.113
	I0729 23:18:24.157636   29396 out.go:177]   - env NO_PROXY=192.168.39.113,192.168.39.226
	I0729 23:18:24.158734   29396 main.go:141] libmachine: (ha-238496-m03) Calling .GetIP
	I0729 23:18:24.161448   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:24.161874   29396 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:18:24.161903   29396 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:18:24.162093   29396 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 23:18:24.166564   29396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 23:18:24.180873   29396 mustload.go:65] Loading cluster: ha-238496
	I0729 23:18:24.181150   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:18:24.181509   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:18:24.181550   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:18:24.196403   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0729 23:18:24.196811   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:18:24.197247   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:18:24.197268   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:18:24.197582   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:18:24.197801   29396 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:18:24.199552   29396 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:18:24.199959   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:18:24.200004   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:18:24.215836   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38527
	I0729 23:18:24.216217   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:18:24.216659   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:18:24.216681   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:18:24.217013   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:18:24.217183   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:18:24.217358   29396 certs.go:68] Setting up /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496 for IP: 192.168.39.149
	I0729 23:18:24.217378   29396 certs.go:194] generating shared ca certs ...
	I0729 23:18:24.217391   29396 certs.go:226] acquiring lock for ca certs: {Name:mk651b4a346cb6b65a98f292d471b5ea2ee1b039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:18:24.217531   29396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key
	I0729 23:18:24.217572   29396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key
	I0729 23:18:24.217581   29396 certs.go:256] generating profile certs ...
	I0729 23:18:24.217646   29396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key
	I0729 23:18:24.217675   29396 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.020f40f1
	I0729 23:18:24.217695   29396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.020f40f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.113 192.168.39.226 192.168.39.149 192.168.39.254]
	I0729 23:18:24.480368   29396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.020f40f1 ...
	I0729 23:18:24.480396   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.020f40f1: {Name:mka76b94d67b685d0074ff48e14df385dc4b115d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:18:24.480554   29396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.020f40f1 ...
	I0729 23:18:24.480564   29396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.020f40f1: {Name:mka13cda52a163fc14ec9c650a674d5e5ee78192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:18:24.480632   29396 certs.go:381] copying /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt.020f40f1 -> /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt
	I0729 23:18:24.480761   29396 certs.go:385] copying /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key.020f40f1 -> /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key
	I0729 23:18:24.480882   29396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key
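The apiserver serving cert is issued once with every address a client might dial in its IP SANs: the in-cluster service IP, loopback, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254. A self-signed sketch with the same SAN list; minikube actually signs with the shared minikubeCA, and the key type here is an assumption:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert builds one serving cert whose IP SANs cover every
// address a client may use, so any of them terminates TLS cleanly.
func newAPIServerCert(ips []string) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, ip := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
	}
	// Self-signed for brevity: template doubles as parent.
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}

func main() {
	der, err := newAPIServerCert([]string{
		"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.113", "192.168.39.226", "192.168.39.149", "192.168.39.254",
	})
	fmt.Println(len(der), err)
}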
	I0729 23:18:24.480896   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 23:18:24.480909   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 23:18:24.480922   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 23:18:24.480934   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 23:18:24.480950   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 23:18:24.480962   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 23:18:24.480973   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 23:18:24.480987   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 23:18:24.481070   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem (1338 bytes)
	W0729 23:18:24.481110   29396 certs.go:480] ignoring /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411_empty.pem, impossibly tiny 0 bytes
	I0729 23:18:24.481122   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 23:18:24.481155   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/ca.pem (1078 bytes)
	I0729 23:18:24.481184   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/cert.pem (1123 bytes)
	I0729 23:18:24.481218   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/key.pem (1675 bytes)
	I0729 23:18:24.481273   29396 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem (1708 bytes)
	I0729 23:18:24.481309   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem -> /usr/share/ca-certificates/194112.pem
	I0729 23:18:24.481328   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:18:24.481343   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem -> /usr/share/ca-certificates/19411.pem
	I0729 23:18:24.481373   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:18:24.484304   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:18:24.484716   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:18:24.484740   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:18:24.484903   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:18:24.485204   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:18:24.485367   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:18:24.485521   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:18:24.563047   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 23:18:24.572327   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 23:18:24.585774   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 23:18:24.591059   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 23:18:24.603264   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 23:18:24.608189   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 23:18:24.621107   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 23:18:24.625692   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 23:18:24.637355   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 23:18:24.643721   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 23:18:24.661989   29396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 23:18:24.668318   29396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 23:18:24.678812   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 23:18:24.708772   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 23:18:24.734095   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 23:18:24.761797   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 23:18:24.791993   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 23:18:24.818897   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 23:18:24.843320   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 23:18:24.868133   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 23:18:24.894007   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/ssl/certs/194112.pem --> /usr/share/ca-certificates/194112.pem (1708 bytes)
	I0729 23:18:24.919386   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 23:18:24.944533   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/certs/19411.pem --> /usr/share/ca-certificates/19411.pem (1338 bytes)
	I0729 23:18:24.969936   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 23:18:24.988206   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 23:18:25.005201   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 23:18:25.024627   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 23:18:25.043229   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 23:18:25.063048   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 23:18:25.082802   29396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 23:18:25.102348   29396 ssh_runner.go:195] Run: openssl version
	I0729 23:18:25.109028   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/194112.pem && ln -fs /usr/share/ca-certificates/194112.pem /etc/ssl/certs/194112.pem"
	I0729 23:18:25.121686   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/194112.pem
	I0729 23:18:25.126649   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/194112.pem
	I0729 23:18:25.126728   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/194112.pem
	I0729 23:18:25.132950   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/194112.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 23:18:25.145264   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 23:18:25.156705   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:18:25.161408   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:18:25.161468   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 23:18:25.167328   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 23:18:25.178351   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19411.pem && ln -fs /usr/share/ca-certificates/19411.pem /etc/ssl/certs/19411.pem"
	I0729 23:18:25.190126   29396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19411.pem
	I0729 23:18:25.194860   29396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/19411.pem
	I0729 23:18:25.194913   29396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19411.pem
	I0729 23:18:25.200760   29396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19411.pem /etc/ssl/certs/51391683.0"
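Each CA lands in the trust store twice: the PEM itself under /usr/share/ca-certificates, plus a <subject-hash>.0 symlink in /etc/ssl/certs, because OpenSSL locates CAs by subject hash. A sketch of one hash-link step, shelling out to openssl the same way the log does (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash for a PEM and
// creates the "<hash>.0" symlink the library expects in its cert dir.
func linkBySubjectHash(pem, linkDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := linkDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}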
	I0729 23:18:25.212002   29396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 23:18:25.216439   29396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 23:18:25.216486   29396 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.30.3 docker true true} ...
	I0729 23:18:25.216561   29396 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-238496-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
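The empty ExecStart= followed by a second ExecStart= in the unit above is the standard systemd override idiom: the first line clears the command inherited from the packaged unit so the drop-in's kubelet invocation replaces it outright. A sketch that writes such a drop-in, with the path and flags copied from the log (running it needs root):

package main

import (
	"fmt"
	"os"
)

// dropIn clears the packaged ExecStart, then sets the node-specific
// kubelet command line, exactly as the unit printed in the log does.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-238496-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
`

func main() {
	err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644)
	fmt.Println(err) // follow with: systemctl daemon-reload && systemctl restart kubelet
}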
	I0729 23:18:25.216583   29396 kube-vip.go:115] generating kube-vip config ...
	I0729 23:18:25.216614   29396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 23:18:25.232406   29396 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 23:18:25.232476   29396 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
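kube-vip runs as a static pod: kubelet starts it straight from /etc/kubernetes/manifests before the apiserver is reachable, and the per-node instances then leader-elect over the plndr-cp-lock lease (5s lease, 3s renew deadline, 1s retry) to decide who answers ARP for 192.168.39.254. Only a couple of fields vary per cluster; a toy template for those two, assuming the rest of the manifest stays as printed above:

package main

import (
	"os"
	"text/template"
)

// Just the per-cluster values: the interface the VIP is announced on
// and the VIP address itself. Everything else in the manifest above
// is fixed by the kube-vip config generator.
const tmpl = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
`

func main() {
	t := template.Must(template.New("kubevip").Parse(tmpl))
	_ = t.Execute(os.Stdout, struct{ Interface, VIP string }{"eth0", "192.168.39.254"})
}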
	I0729 23:18:25.232536   29396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 23:18:25.242069   29396 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 23:18:25.242137   29396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 23:18:25.251750   29396 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 23:18:25.251771   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 23:18:25.251797   29396 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 23:18:25.251829   29396 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 23:18:25.251839   29396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 23:18:25.251840   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:18:25.251844   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 23:18:25.251900   29396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 23:18:25.257314   29396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 23:18:25.257333   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 23:18:25.279779   29396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 23:18:25.279809   29396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 23:18:25.279816   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 23:18:25.279914   29396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 23:18:25.327896   29396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 23:18:25.327938   29396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
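Each binary URL carries a checksum=file:...sha256 companion, so the transfer is verified against the published digest rather than trusted blindly. A sketch of that download-and-verify step, assuming the .sha256 file holds just the hex digest (as dl.k8s.io's do) and omitting retries:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchChecked downloads a binary plus its published .sha256 and
// refuses the payload if the digests do not match.
func fetchChecked(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func main() {
	b, err := fetchChecked("https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl")
	fmt.Println(len(b), err)
}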
	I0729 23:18:26.135622   29396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 23:18:26.145637   29396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0729 23:18:26.163912   29396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 23:18:26.181593   29396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 23:18:26.199220   29396 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 23:18:26.203822   29396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 23:18:26.217025   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:18:26.342359   29396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 23:18:26.368665   29396 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:18:26.369015   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:18:26.369067   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:18:26.386170   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0729 23:18:26.386689   29396 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:18:26.387240   29396 main.go:141] libmachine: Using API Version  1
	I0729 23:18:26.387261   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:18:26.387672   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:18:26.387888   29396 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:18:26.388031   29396 start.go:317] joinCluster: &{Name:ha-238496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-238496 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:18:26.388163   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 23:18:26.388187   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:18:26.391336   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:18:26.391895   29396 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:18:26.391918   29396 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:18:26.392077   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:18:26.392263   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:18:26.392415   29396 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:18:26.392560   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:18:26.581813   29396 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:18:26.581869   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0ywfv2.rkmgx63wf72zwqqk --discovery-token-ca-cert-hash sha256:da4124175dbd4d7966590c68bf3c2627d9fda969ee89096732ee7fd4a463dd4a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-238496-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I0729 23:18:51.475405   29396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0ywfv2.rkmgx63wf72zwqqk --discovery-token-ca-cert-hash sha256:da4124175dbd4d7966590c68bf3c2627d9fda969ee89096732ee7fd4a463dd4a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-238496-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (24.893510323s)
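Joining m03 is a two-step handshake: mint a fresh join command on the existing control plane (kubeadm token create --print-join-command --ttl=0), then run it on the new node with the control-plane flags appended. A local sketch of the same flow, assuming both commands can run via plain exec rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// joinControlPlane asks the existing control plane for a join command,
// then runs it with the extra flags that promote the node to a
// control-plane member, mirroring the two kubeadm calls in the log.
func joinControlPlane(advertiseIP, nodeName string) error {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		return err
	}
	join := strings.TrimSpace(string(out)) +
		" --control-plane" +
		" --apiserver-advertise-address=" + advertiseIP +
		" --node-name=" + nodeName
	return exec.Command("/bin/bash", "-c", "sudo "+join).Run()
}

func main() {
	fmt.Println(joinControlPlane("192.168.39.149", "ha-238496-m03"))
}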
	I0729 23:18:51.475451   29396 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 23:18:52.014804   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-238496-m03 minikube.k8s.io/updated_at=2024_07_29T23_18_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=ha-238496 minikube.k8s.io/primary=false
	I0729 23:18:52.155300   29396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-238496-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 23:18:52.304779   29396 start.go:319] duration metric: took 25.916745009s to joinCluster
	I0729 23:18:52.304850   29396 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 23:18:52.305212   29396 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:18:52.306430   29396 out.go:177] * Verifying Kubernetes components...
	I0729 23:18:52.307833   29396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 23:18:52.570958   29396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 23:18:52.598459   29396 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:18:52.598826   29396 kapi.go:59] client config for ha-238496: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.crt", KeyFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/ha-238496/client.key", CAFile:"/home/jenkins/minikube-integration/19347-12221/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 23:18:52.598909   29396 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.113:8443
	I0729 23:18:52.599174   29396 node_ready.go:35] waiting up to 6m0s for node "ha-238496-m03" to be "Ready" ...
	I0729 23:18:52.599248   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:52.599255   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:52.599266   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:52.599273   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:52.602438   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:53.099412   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:53.099460   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:53.099472   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:53.099478   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:53.103215   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:53.600157   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:53.600181   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:53.600190   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:53.600197   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:53.605227   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:18:54.099698   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:54.099721   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:54.099730   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:54.099734   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:54.103051   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:54.600168   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:54.600193   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:54.600204   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:54.600210   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:54.604181   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:54.604814   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:18:55.099901   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:55.099941   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:55.099951   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:55.099956   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:55.103175   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:55.599949   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:55.599969   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:55.599977   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:55.599981   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:55.604676   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:18:56.099926   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:56.099949   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:56.099957   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:56.099962   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:56.103784   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:56.599928   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:56.599955   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:56.599964   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:56.599969   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:56.603571   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:57.099427   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:57.099449   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:57.099456   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:57.099460   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:57.103316   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:57.103854   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:18:57.599973   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:57.599999   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:57.600010   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:57.600016   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:57.603583   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:58.099476   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:58.099500   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:58.099510   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:58.099514   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:58.103276   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:58.600166   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:58.600188   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:58.600196   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:58.600201   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:58.604175   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:18:59.100227   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:59.100262   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:59.100271   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:59.100275   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:59.104347   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:18:59.106420   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:18:59.600223   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:18:59.600244   29396 round_trippers.go:469] Request Headers:
	I0729 23:18:59.600252   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:18:59.600257   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:18:59.603629   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:00.100154   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:00.100174   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:00.100182   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:00.100185   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:00.103877   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:00.599346   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:00.599368   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:00.599374   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:00.599378   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:00.603739   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:19:01.100139   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:01.100165   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:01.100178   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:01.100183   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:01.103691   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:01.599585   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:01.599606   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:01.599614   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:01.599618   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:01.603036   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:01.603692   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:19:02.099930   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:02.099953   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:02.099961   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:02.099964   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:02.103792   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:02.600386   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:02.600405   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:02.600414   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:02.600423   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:02.604228   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:03.099945   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:03.099969   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:03.099979   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:03.099987   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:03.103141   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:03.599963   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:03.599984   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:03.599991   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:03.599997   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:03.603708   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:03.604435   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:19:04.099425   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:04.099446   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:04.099462   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:04.099467   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:04.102745   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:04.600170   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:04.600195   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:04.600205   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:04.600210   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:04.604071   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:05.099965   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:05.099992   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:05.100004   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:05.100010   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:05.103443   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:05.599703   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:05.599733   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:05.599741   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:05.599745   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:05.602758   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:06.100313   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:06.100339   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:06.100349   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:06.100356   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:06.104074   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:06.104560   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:19:06.599776   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:06.599799   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:06.599807   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:06.599811   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:06.603694   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:07.100397   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:07.100425   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:07.100436   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:07.100443   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:07.109921   29396 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 23:19:07.599360   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:07.599382   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:07.599390   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:07.599394   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:07.603023   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:08.099904   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:08.099925   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:08.099932   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:08.099936   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:08.103187   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:08.599540   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:08.599562   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:08.599570   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:08.599576   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:08.602493   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:08.603114   29396 node_ready.go:53] node "ha-238496-m03" has status "Ready":"False"
	I0729 23:19:09.099505   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:09.099526   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.099533   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.099536   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.217759   29396 round_trippers.go:574] Response Status: 200 OK in 118 milliseconds
	I0729 23:19:09.599507   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:09.599529   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.599536   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.599540   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.603233   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:09.603757   29396 node_ready.go:49] node "ha-238496-m03" has status "Ready":"True"
	I0729 23:19:09.603776   29396 node_ready.go:38] duration metric: took 17.004589012s for node "ha-238496-m03" to be "Ready" ...
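The 17s wait above is nothing more than a 500ms GET loop against /api/v1/nodes/ha-238496-m03, checking for a Ready condition with status True. A stripped-down sketch of that loop, omitting the client-cert TLS setup the real client config (kapi.go) carries:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus is the tiny slice of the Node object the readiness loop
// actually inspects.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the node every 500ms until Ready=True or the
// timeout passes. A real caller must authenticate with client certs.
func waitNodeReady(apiServer, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(apiServer + "/api/v1/nodes/" + name)
		if err == nil {
			var n nodeStatus
			if json.NewDecoder(resp.Body).Decode(&n) == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready after %s", name, timeout)
}

func main() {
	fmt.Println(waitNodeReady("https://192.168.39.113:8443", "ha-238496-m03", 6*time.Minute))
}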
	I0729 23:19:09.603784   29396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 23:19:09.603836   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:19:09.603846   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.603855   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.603863   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.611049   29396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 23:19:09.617888   29396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8nps" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.617981   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p8nps
	I0729 23:19:09.617989   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.617997   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.618001   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.621617   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:09.622359   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:09.622375   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.622383   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.622388   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.625417   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:09.626016   29396 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8nps" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:09.626037   29396 pod_ready.go:81] duration metric: took 8.124272ms for pod "coredns-7db6d8ff4d-p8nps" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.626046   29396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tjplq" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.626117   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tjplq
	I0729 23:19:09.626125   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.626139   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.626146   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.628706   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:09.629438   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:09.629451   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.629464   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.629468   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.632161   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:09.632598   29396 pod_ready.go:92] pod "coredns-7db6d8ff4d-tjplq" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:09.632614   29396 pod_ready.go:81] duration metric: took 6.559167ms for pod "coredns-7db6d8ff4d-tjplq" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.632622   29396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.632663   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/etcd-ha-238496
	I0729 23:19:09.632671   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.632677   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.632683   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.635207   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:09.635748   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:09.635761   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.635769   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.635774   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.638168   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:09.638558   29396 pod_ready.go:92] pod "etcd-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:09.638574   29396 pod_ready.go:81] duration metric: took 5.946799ms for pod "etcd-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.638585   29396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.638634   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/etcd-ha-238496-m02
	I0729 23:19:09.638641   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.638648   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.638653   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.641255   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:09.641916   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:09.641934   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.641944   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.641952   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.644523   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:09.644949   29396 pod_ready.go:92] pod "etcd-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:09.644964   29396 pod_ready.go:81] duration metric: took 6.369287ms for pod "etcd-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.644973   29396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:09.800329   29396 request.go:629] Waited for 155.300695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/etcd-ha-238496-m03
	I0729 23:19:09.800403   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/etcd-ha-238496-m03
	I0729 23:19:09.800414   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.800423   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.800447   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:09.803688   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:09.999868   29396 request.go:629] Waited for 195.356437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:09.999935   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:09.999940   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:09.999947   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:09.999951   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:10.003518   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:10.004201   29396 pod_ready.go:92] pod "etcd-ha-238496-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:10.004220   29396 pod_ready.go:81] duration metric: took 359.241717ms for pod "etcd-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
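	Note: the "Waited for … due to client-side throttling, not priority and fairness" lines above (and throughout the rest of this readiness loop) come from client-go's client-side rate limiter, whose defaults are QPS 5 and burst 10; because the loop issues two GETs per pod, most requests pay roughly 200ms of self-imposed delay. A minimal sketch of raising those limits on a rest.Config — the function name and the 50/100 values are illustrative, not minikube's actual code:

	    package sketch

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newFasterClient builds a clientset with a higher client-side
	    // rate limit than client-go's defaults (QPS=5, Burst=10), which
	    // are what produce the ~200ms waits logged above.
	    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        cfg.QPS = 50   // illustrative values, not minikube's
	        cfg.Burst = 100
	        return kubernetes.NewForConfig(cfg)
	    }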
	I0729 23:19:10.004248   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:10.200290   29396 request.go:629] Waited for 195.933294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496
	I0729 23:19:10.200365   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496
	I0729 23:19:10.200371   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:10.200379   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:10.200384   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:10.208712   29396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 23:19:10.399884   29396 request.go:629] Waited for 190.181377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:10.399943   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:10.399950   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:10.399960   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:10.399966   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:10.403257   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:10.403958   29396 pod_ready.go:92] pod "kube-apiserver-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:10.403980   29396 pod_ready.go:81] duration metric: took 399.717532ms for pod "kube-apiserver-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:10.403994   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:10.600125   29396 request.go:629] Waited for 196.04647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496-m02
	I0729 23:19:10.600187   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496-m02
	I0729 23:19:10.600192   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:10.600201   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:10.600206   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:10.603755   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:10.799857   29396 request.go:629] Waited for 195.304515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:10.799951   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:10.799960   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:10.799968   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:10.799974   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:10.804771   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:19:10.805297   29396 pod_ready.go:92] pod "kube-apiserver-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:10.805316   29396 pod_ready.go:81] duration metric: took 401.313343ms for pod "kube-apiserver-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:10.805328   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:11.000424   29396 request.go:629] Waited for 195.030165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496-m03
	I0729 23:19:11.000489   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-238496-m03
	I0729 23:19:11.000513   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:11.000523   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:11.000526   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:11.003767   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:11.200256   29396 request.go:629] Waited for 195.566068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:11.200305   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:11.200310   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:11.200317   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:11.200321   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:11.207986   29396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 23:19:11.208915   29396 pod_ready.go:92] pod "kube-apiserver-ha-238496-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:11.208933   29396 pod_ready.go:81] duration metric: took 403.597736ms for pod "kube-apiserver-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:11.208943   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:11.399869   29396 request.go:629] Waited for 190.871481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496
	I0729 23:19:11.399925   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496
	I0729 23:19:11.399932   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:11.399942   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:11.399950   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:11.403414   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:11.600424   29396 request.go:629] Waited for 196.336161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:11.600474   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:11.600480   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:11.600490   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:11.600500   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:11.603696   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:11.604331   29396 pod_ready.go:92] pod "kube-controller-manager-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:11.604348   29396 pod_ready.go:81] duration metric: took 395.398637ms for pod "kube-controller-manager-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:11.604357   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:11.799625   29396 request.go:629] Waited for 194.825123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496-m02
	I0729 23:19:11.799685   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496-m02
	I0729 23:19:11.799690   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:11.799697   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:11.799702   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:11.805192   29396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 23:19:12.000327   29396 request.go:629] Waited for 194.372047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:12.000404   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:12.000412   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:12.000420   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:12.000429   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:12.003948   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:12.004664   29396 pod_ready.go:92] pod "kube-controller-manager-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:12.004682   29396 pod_ready.go:81] duration metric: took 400.318127ms for pod "kube-controller-manager-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:12.004692   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:12.199772   29396 request.go:629] Waited for 195.023091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496-m03
	I0729 23:19:12.199839   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-238496-m03
	I0729 23:19:12.199845   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:12.199853   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:12.199861   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:12.208978   29396 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 23:19:12.399970   29396 request.go:629] Waited for 188.017146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:12.400027   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:12.400034   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:12.400044   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:12.400049   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:12.403826   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:12.404648   29396 pod_ready.go:92] pod "kube-controller-manager-ha-238496-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:12.404664   29396 pod_ready.go:81] duration metric: took 399.96617ms for pod "kube-controller-manager-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:12.404675   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84q2j" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:12.599837   29396 request.go:629] Waited for 195.088918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84q2j
	I0729 23:19:12.599902   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84q2j
	I0729 23:19:12.599908   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:12.599916   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:12.599922   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:12.603764   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:12.799903   29396 request.go:629] Waited for 195.280664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:12.799988   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:12.799999   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:12.800010   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:12.800017   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:12.804112   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:19:12.804755   29396 pod_ready.go:92] pod "kube-proxy-84q2j" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:12.804771   29396 pod_ready.go:81] duration metric: took 400.090247ms for pod "kube-proxy-84q2j" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:12.804784   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6vdn" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:12.999973   29396 request.go:629] Waited for 195.096525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m6vdn
	I0729 23:19:13.000025   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m6vdn
	I0729 23:19:13.000030   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:13.000038   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:13.000043   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:13.004206   29396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 23:19:13.200315   29396 request.go:629] Waited for 195.344861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:13.200387   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:13.200394   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:13.200403   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:13.200407   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:13.203669   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:13.204391   29396 pod_ready.go:92] pod "kube-proxy-m6vdn" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:13.204408   29396 pod_ready.go:81] duration metric: took 399.614059ms for pod "kube-proxy-m6vdn" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:13.204418   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrvw6" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:13.399561   29396 request.go:629] Waited for 195.057489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrvw6
	I0729 23:19:13.399619   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrvw6
	I0729 23:19:13.399625   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:13.399633   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:13.399641   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:13.403311   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:13.600350   29396 request.go:629] Waited for 196.355182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:13.600420   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:13.600426   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:13.600433   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:13.600438   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:13.603539   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:13.604101   29396 pod_ready.go:92] pod "kube-proxy-nrvw6" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:13.604130   29396 pod_ready.go:81] duration metric: took 399.697487ms for pod "kube-proxy-nrvw6" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:13.604154   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:13.800215   29396 request.go:629] Waited for 196.003393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496
	I0729 23:19:13.800264   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496
	I0729 23:19:13.800269   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:13.800276   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:13.800282   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:13.803643   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:14.000583   29396 request.go:629] Waited for 196.347088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:14.000640   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496
	I0729 23:19:14.000646   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.000656   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.000662   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:14.003455   29396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 23:19:14.004025   29396 pod_ready.go:92] pod "kube-scheduler-ha-238496" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:14.004043   29396 pod_ready.go:81] duration metric: took 399.883328ms for pod "kube-scheduler-ha-238496" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:14.004051   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:14.200236   29396 request.go:629] Waited for 196.127033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496-m02
	I0729 23:19:14.200346   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496-m02
	I0729 23:19:14.200366   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.200377   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.200384   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:14.210764   29396 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0729 23:19:14.399682   29396 request.go:629] Waited for 188.173442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:14.399751   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m02
	I0729 23:19:14.399756   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.399764   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.399771   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:14.403486   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:14.404167   29396 pod_ready.go:92] pod "kube-scheduler-ha-238496-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:14.404189   29396 pod_ready.go:81] duration metric: took 400.130141ms for pod "kube-scheduler-ha-238496-m02" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:14.404202   29396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:14.599520   29396 request.go:629] Waited for 195.261081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496-m03
	I0729 23:19:14.599572   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-238496-m03
	I0729 23:19:14.599577   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.599584   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.599592   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:14.602991   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:14.800140   29396 request.go:629] Waited for 196.358765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:14.800203   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes/ha-238496-m03
	I0729 23:19:14.800215   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.800226   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.800232   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:14.803666   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:14.804301   29396 pod_ready.go:92] pod "kube-scheduler-ha-238496-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 23:19:14.804318   29396 pod_ready.go:81] duration metric: took 400.109552ms for pod "kube-scheduler-ha-238496-m03" in "kube-system" namespace to be "Ready" ...
	I0729 23:19:14.804328   29396 pod_ready.go:38] duration metric: took 5.200535213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
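	The pod_ready loop that just finished alternates a GET on each system pod with a GET on its node, succeeding once the pod reports the Ready condition. A minimal client-go sketch of that readiness predicate (names and the 500ms poll interval are illustrative, not minikube's):

	    package sketch

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls until the named pod reports condition
	    // Ready=True, mirroring what the loop above does per system pod.
	    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
	            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // treat lookup errors as "not ready yet"
	            }
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }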
	I0729 23:19:14.804344   29396 api_server.go:52] waiting for apiserver process to appear ...
	I0729 23:19:14.804391   29396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 23:19:14.822172   29396 api_server.go:72] duration metric: took 22.517280061s to wait for apiserver process to appear ...
	I0729 23:19:14.822199   29396 api_server.go:88] waiting for apiserver healthz status ...
	I0729 23:19:14.822218   29396 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8443/healthz ...
	I0729 23:19:14.826236   29396 api_server.go:279] https://192.168.39.113:8443/healthz returned 200:
	ok
	I0729 23:19:14.826287   29396 round_trippers.go:463] GET https://192.168.39.113:8443/version
	I0729 23:19:14.826292   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.826299   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.826304   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:14.827085   29396 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 23:19:14.827244   29396 api_server.go:141] control plane version: v1.30.3
	I0729 23:19:14.827263   29396 api_server.go:131] duration metric: took 5.057172ms to wait for apiserver health ...
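	The healthz probe above is a plain HTTPS GET against the control-plane endpoint that expects the literal body "ok". A sketch of the same check with net/http — skipping TLS verification here is an illustrative shortcut; minikube authenticates with the cluster's client certificates instead:

	    package sketch

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	    )

	    // checkHealthz fetches <endpoint>/healthz and succeeds only on a
	    // 200 response whose body is the literal "ok", as logged above.
	    func checkHealthz(endpoint string) error {
	        client := &http.Client{Transport: &http.Transport{
	            // Illustrative shortcut; do not skip verification in real code.
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	        }}
	        resp, err := client.Get(endpoint + "/healthz")
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
	            return fmt.Errorf("healthz check failed: %d %q", resp.StatusCode, body)
	        }
	        return nil
	    }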
	I0729 23:19:14.827272   29396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 23:19:14.999578   29396 request.go:629] Waited for 172.242999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:19:14.999628   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:19:14.999655   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:14.999665   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:14.999671   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:15.006220   29396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 23:19:15.012395   29396 system_pods.go:59] 24 kube-system pods found
	I0729 23:19:15.012427   29396 system_pods.go:61] "coredns-7db6d8ff4d-p8nps" [af3f5c7b-1996-497f-95f7-4bfc87392dc7] Running
	I0729 23:19:15.012434   29396 system_pods.go:61] "coredns-7db6d8ff4d-tjplq" [db7a6b8c-bfe3-4291-bf9a-9ce96bb5b0b7] Running
	I0729 23:19:15.012439   29396 system_pods.go:61] "etcd-ha-238496" [ed3a1237-a4c1-4e3f-b7d6-6b5237f7a18b] Running
	I0729 23:19:15.012443   29396 system_pods.go:61] "etcd-ha-238496-m02" [0a4d5ebc-a7be-445f-bdfc-47b3b1c01803] Running
	I0729 23:19:15.012448   29396 system_pods.go:61] "etcd-ha-238496-m03" [8cc4bf64-609d-4cf2-b8f6-e0f660e4428c] Running
	I0729 23:19:15.012452   29396 system_pods.go:61] "kindnet-55jmm" [7ddd1f82-1105-4694-b8d6-5198fdbd1f86] Running
	I0729 23:19:15.012459   29396 system_pods.go:61] "kindnet-kb2hw" [ef875a41-530f-48ba-b034-d08a8a7acbbc] Running
	I0729 23:19:15.012464   29396 system_pods.go:61] "kindnet-xvzff" [400a9d4f-d218-443e-b001-edd5e5fd5af7] Running
	I0729 23:19:15.012470   29396 system_pods.go:61] "kube-apiserver-ha-238496" [54eebf95-2bd3-4c57-9794-170fccda1dbb] Running
	I0729 23:19:15.012475   29396 system_pods.go:61] "kube-apiserver-ha-238496-m02" [66429444-6c99-474c-9294-c569e1a5cc46] Running
	I0729 23:19:15.012483   29396 system_pods.go:61] "kube-apiserver-ha-238496-m03" [fe9eddc6-6bb4-4f78-891d-e5830247246f] Running
	I0729 23:19:15.012490   29396 system_pods.go:61] "kube-controller-manager-ha-238496" [bb6bc2ad-54ec-42fa-8f18-e33cb50a8ce8] Running
	I0729 23:19:15.012498   29396 system_pods.go:61] "kube-controller-manager-ha-238496-m02" [8836c211-ee9d-403a-8383-333c22f1b945] Running
	I0729 23:19:15.012503   29396 system_pods.go:61] "kube-controller-manager-ha-238496-m03" [8e748a5e-733a-4be7-896f-0501c2d63ab9] Running
	I0729 23:19:15.012508   29396 system_pods.go:61] "kube-proxy-84q2j" [4a6fb431-510a-4ecb-a8d3-e595512e0e52] Running
	I0729 23:19:15.012514   29396 system_pods.go:61] "kube-proxy-m6vdn" [f3731d91-d919-4f7f-a7b9-2bf7ba93569b] Running
	I0729 23:19:15.012521   29396 system_pods.go:61] "kube-proxy-nrvw6" [708cca57-5274-4ad9-871c-048f24b43a33] Running
	I0729 23:19:15.012525   29396 system_pods.go:61] "kube-scheduler-ha-238496" [b4999631-2ffc-4684-ab41-7e065cbbe74b] Running
	I0729 23:19:15.012531   29396 system_pods.go:61] "kube-scheduler-ha-238496-m02" [4eb7be71-6cad-4260-a4c0-6a97011e6ec5] Running
	I0729 23:19:15.012536   29396 system_pods.go:61] "kube-scheduler-ha-238496-m03" [bc034dc2-6055-4edd-90f2-7c80b18c5842] Running
	I0729 23:19:15.012541   29396 system_pods.go:61] "kube-vip-ha-238496" [f248f380-c48b-451a-82e7-0aeb1e0ba6eb] Running
	I0729 23:19:15.012547   29396 system_pods.go:61] "kube-vip-ha-238496-m02" [39a50caf-f960-4d68-9235-d6dacace51c1] Running
	I0729 23:19:15.012553   29396 system_pods.go:61] "kube-vip-ha-238496-m03" [317c63a1-f4c1-44b7-8f97-3a5c01f9a64e] Running
	I0729 23:19:15.012558   29396 system_pods.go:61] "storage-provisioner" [2feba04d-7105-41cd-b308-747ed0079849] Running
	I0729 23:19:15.012565   29396 system_pods.go:74] duration metric: took 185.282872ms to wait for pod list to return data ...
	I0729 23:19:15.012577   29396 default_sa.go:34] waiting for default service account to be created ...
	I0729 23:19:15.199847   29396 request.go:629] Waited for 187.195853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/default/serviceaccounts
	I0729 23:19:15.199896   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/default/serviceaccounts
	I0729 23:19:15.199901   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:15.199908   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:15.199914   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:15.203366   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:15.203475   29396 default_sa.go:45] found service account: "default"
	I0729 23:19:15.203489   29396 default_sa.go:55] duration metric: took 190.906753ms for default service account to be created ...
	I0729 23:19:15.203497   29396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 23:19:15.399849   29396 request.go:629] Waited for 196.29656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:19:15.399915   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/namespaces/kube-system/pods
	I0729 23:19:15.399920   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:15.399928   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:15.399933   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:15.406727   29396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 23:19:15.413271   29396 system_pods.go:86] 24 kube-system pods found
	I0729 23:19:15.413300   29396 system_pods.go:89] "coredns-7db6d8ff4d-p8nps" [af3f5c7b-1996-497f-95f7-4bfc87392dc7] Running
	I0729 23:19:15.413306   29396 system_pods.go:89] "coredns-7db6d8ff4d-tjplq" [db7a6b8c-bfe3-4291-bf9a-9ce96bb5b0b7] Running
	I0729 23:19:15.413310   29396 system_pods.go:89] "etcd-ha-238496" [ed3a1237-a4c1-4e3f-b7d6-6b5237f7a18b] Running
	I0729 23:19:15.413314   29396 system_pods.go:89] "etcd-ha-238496-m02" [0a4d5ebc-a7be-445f-bdfc-47b3b1c01803] Running
	I0729 23:19:15.413319   29396 system_pods.go:89] "etcd-ha-238496-m03" [8cc4bf64-609d-4cf2-b8f6-e0f660e4428c] Running
	I0729 23:19:15.413322   29396 system_pods.go:89] "kindnet-55jmm" [7ddd1f82-1105-4694-b8d6-5198fdbd1f86] Running
	I0729 23:19:15.413326   29396 system_pods.go:89] "kindnet-kb2hw" [ef875a41-530f-48ba-b034-d08a8a7acbbc] Running
	I0729 23:19:15.413330   29396 system_pods.go:89] "kindnet-xvzff" [400a9d4f-d218-443e-b001-edd5e5fd5af7] Running
	I0729 23:19:15.413334   29396 system_pods.go:89] "kube-apiserver-ha-238496" [54eebf95-2bd3-4c57-9794-170fccda1dbb] Running
	I0729 23:19:15.413338   29396 system_pods.go:89] "kube-apiserver-ha-238496-m02" [66429444-6c99-474c-9294-c569e1a5cc46] Running
	I0729 23:19:15.413343   29396 system_pods.go:89] "kube-apiserver-ha-238496-m03" [fe9eddc6-6bb4-4f78-891d-e5830247246f] Running
	I0729 23:19:15.413347   29396 system_pods.go:89] "kube-controller-manager-ha-238496" [bb6bc2ad-54ec-42fa-8f18-e33cb50a8ce8] Running
	I0729 23:19:15.413351   29396 system_pods.go:89] "kube-controller-manager-ha-238496-m02" [8836c211-ee9d-403a-8383-333c22f1b945] Running
	I0729 23:19:15.413356   29396 system_pods.go:89] "kube-controller-manager-ha-238496-m03" [8e748a5e-733a-4be7-896f-0501c2d63ab9] Running
	I0729 23:19:15.413360   29396 system_pods.go:89] "kube-proxy-84q2j" [4a6fb431-510a-4ecb-a8d3-e595512e0e52] Running
	I0729 23:19:15.413363   29396 system_pods.go:89] "kube-proxy-m6vdn" [f3731d91-d919-4f7f-a7b9-2bf7ba93569b] Running
	I0729 23:19:15.413370   29396 system_pods.go:89] "kube-proxy-nrvw6" [708cca57-5274-4ad9-871c-048f24b43a33] Running
	I0729 23:19:15.413375   29396 system_pods.go:89] "kube-scheduler-ha-238496" [b4999631-2ffc-4684-ab41-7e065cbbe74b] Running
	I0729 23:19:15.413383   29396 system_pods.go:89] "kube-scheduler-ha-238496-m02" [4eb7be71-6cad-4260-a4c0-6a97011e6ec5] Running
	I0729 23:19:15.413387   29396 system_pods.go:89] "kube-scheduler-ha-238496-m03" [bc034dc2-6055-4edd-90f2-7c80b18c5842] Running
	I0729 23:19:15.413394   29396 system_pods.go:89] "kube-vip-ha-238496" [f248f380-c48b-451a-82e7-0aeb1e0ba6eb] Running
	I0729 23:19:15.413397   29396 system_pods.go:89] "kube-vip-ha-238496-m02" [39a50caf-f960-4d68-9235-d6dacace51c1] Running
	I0729 23:19:15.413401   29396 system_pods.go:89] "kube-vip-ha-238496-m03" [317c63a1-f4c1-44b7-8f97-3a5c01f9a64e] Running
	I0729 23:19:15.413404   29396 system_pods.go:89] "storage-provisioner" [2feba04d-7105-41cd-b308-747ed0079849] Running
	I0729 23:19:15.413412   29396 system_pods.go:126] duration metric: took 209.91061ms to wait for k8s-apps to be running ...
	I0729 23:19:15.413422   29396 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 23:19:15.413463   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:19:15.431449   29396 system_svc.go:56] duration metric: took 18.020128ms WaitForService to wait for kubelet
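	The kubelet check shells out to systemctl, where "is-active --quiet" prints nothing and reports purely through its exit code (0 when the unit is active). A local sketch of the same probe — minikube actually runs the command remotely through ssh_runner:

	    package sketch

	    import "os/exec"

	    // kubeletActive mirrors the systemctl probe logged above; a nil
	    // error from Run means the command exited 0, i.e. the unit is active.
	    func kubeletActive() bool {
	        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	    }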
	I0729 23:19:15.431477   29396 kubeadm.go:582] duration metric: took 23.126590826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 23:19:15.431495   29396 node_conditions.go:102] verifying NodePressure condition ...
	I0729 23:19:15.599855   29396 request.go:629] Waited for 168.298521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.113:8443/api/v1/nodes
	I0729 23:19:15.599919   29396 round_trippers.go:463] GET https://192.168.39.113:8443/api/v1/nodes
	I0729 23:19:15.599925   29396 round_trippers.go:469] Request Headers:
	I0729 23:19:15.599932   29396 round_trippers.go:473]     Accept: application/json, */*
	I0729 23:19:15.599939   29396 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 23:19:15.603551   29396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 23:19:15.604409   29396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 23:19:15.604432   29396 node_conditions.go:123] node cpu capacity is 2
	I0729 23:19:15.604444   29396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 23:19:15.604454   29396 node_conditions.go:123] node cpu capacity is 2
	I0729 23:19:15.604459   29396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 23:19:15.604464   29396 node_conditions.go:123] node cpu capacity is 2
	I0729 23:19:15.604470   29396 node_conditions.go:105] duration metric: took 172.970183ms to run NodePressure ...
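	The NodePressure pass reads each node's capacity from the API; that is where the "17734596Ki" and "cpu capacity is 2" figures above come from. A sketch of that read (illustrative names, assuming a client-go clientset):

	    package sketch

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // printNodeCapacity lists all nodes and prints the two capacity
	    // fields the node_conditions lines above log for each node.
	    func printNodeCapacity(cs kubernetes.Interface) error {
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	        }
	        return nil
	    }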
	I0729 23:19:15.604485   29396 start.go:241] waiting for startup goroutines ...
	I0729 23:19:15.604505   29396 start.go:255] writing updated cluster config ...
	I0729 23:19:15.604809   29396 ssh_runner.go:195] Run: rm -f paused
	I0729 23:19:15.655539   29396 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 23:19:15.657726   29396 out.go:177] * Done! kubectl is now configured to use "ha-238496" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 29 23:16:37 ha-238496 cri-dockerd[1092]: time="2024-07-29T23:16:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/582b2c7baceb501e49656618efcbf0f27619a18145c3ff398c3b789cc9bfdf95/resolv.conf as [nameserver 192.168.122.1]"
	Jul 29 23:16:37 ha-238496 cri-dockerd[1092]: time="2024-07-29T23:16:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014bae0f7e88a3f1ec0a6a6132b4f98aa48560f60c128e668b46efd5c355bd7/resolv.conf as [nameserver 192.168.122.1]"
	Jul 29 23:16:37 ha-238496 cri-dockerd[1092]: time="2024-07-29T23:16:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9397d0ccb3a1e588ab8cda3b68fb14860b17dc14f00ccc03418139af4c83cc8/resolv.conf as [nameserver 192.168.122.1]"
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.484476445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.485248709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.485357241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.485728790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.638588001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.639074699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.639295310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.639593280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.656417401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.656485114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.656499490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:16:37 ha-238496 dockerd[1202]: time="2024-07-29T23:16:37.656574401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:19:18 ha-238496 dockerd[1202]: time="2024-07-29T23:19:18.936226883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:19:18 ha-238496 dockerd[1202]: time="2024-07-29T23:19:18.936852960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:19:18 ha-238496 dockerd[1202]: time="2024-07-29T23:19:18.936964837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:19:18 ha-238496 dockerd[1202]: time="2024-07-29T23:19:18.937632906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:19:19 ha-238496 cri-dockerd[1092]: time="2024-07-29T23:19:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aa1e1b89fad706a2cc592ca475ad4636a8b6fff855df129a27b09aaece54827a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 23:19:21 ha-238496 cri-dockerd[1092]: time="2024-07-29T23:19:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 29 23:19:21 ha-238496 dockerd[1202]: time="2024-07-29T23:19:21.398061329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:19:21 ha-238496 dockerd[1202]: time="2024-07-29T23:19:21.398641089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:19:21 ha-238496 dockerd[1202]: time="2024-07-29T23:19:21.398740310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:19:21 ha-238496 dockerd[1202]: time="2024-07-29T23:19:21.399226781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b8a213421914f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   33 seconds ago      Running             busybox                   0                   aa1e1b89fad70       busybox-fc5497c4f-ftt4w
	dd28444131756       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   0                   a9397d0ccb3a1       coredns-7db6d8ff4d-tjplq
	29db34bf17cce       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   0                   a014bae0f7e88       coredns-7db6d8ff4d-p8nps
	a338297585908       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       0                   582b2c7baceb5       storage-provisioner
	906ec8f4a7599       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              3 minutes ago       Running             kindnet-cni               0                   4f259ecefd5fb       kindnet-55jmm
	2bb8727b4690e       55bb025d2cfa5                                                                                         3 minutes ago       Running             kube-proxy                0                   cb40e8b38f063       kube-proxy-nrvw6
	4fec030f63653       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     3 minutes ago       Running             kube-vip                  0                   9e857d29c684c       kube-vip-ha-238496
	2127946fcd8b4       76932a3b37d7e                                                                                         3 minutes ago       Running             kube-controller-manager   0                   73f194ce0a0ec       kube-controller-manager-ha-238496
	dc0f824c8c08d       3edc18e7b7672                                                                                         3 minutes ago       Running             kube-scheduler            0                   675aa145ad460       kube-scheduler-ha-238496
	189714a08644c       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   d32ae036bb1ae       etcd-ha-238496
	4607f65fdc744       1f6d574d502f3                                                                                         3 minutes ago       Running             kube-apiserver            0                   a1b03a6b48501       kube-apiserver-ha-238496
	
	
	==> coredns [29db34bf17cc] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33135 - 55754 "HINFO IN 7267661357375732516.6129737729494210875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018889362s
	[INFO] 10.244.2.3:51780 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000748186s
	[INFO] 10.244.2.3:59083 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.004127652s
	[INFO] 10.244.2.3:36153 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.0195213s
	[INFO] 10.244.1.2:33877 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000197751s
	[INFO] 10.244.2.3:52428 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000313348s
	[INFO] 10.244.2.3:54129 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000283487s
	[INFO] 10.244.2.3:46649 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209103s
	[INFO] 10.244.0.4:32930 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002157659s
	[INFO] 10.244.0.4:38959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129551s
	[INFO] 10.244.0.4:55194 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128726s
	[INFO] 10.244.0.4:34700 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001779956s
	[INFO] 10.244.1.2:34614 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001969757s
	[INFO] 10.244.1.2:46578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086621s
	[INFO] 10.244.1.2:38697 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067185s
	[INFO] 10.244.1.2:40745 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079935s
	[INFO] 10.244.1.2:54339 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152584s
	[INFO] 10.244.2.3:35899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140543s
	[INFO] 10.244.2.3:55438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179287s
	[INFO] 10.244.0.4:54155 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194153s
	[INFO] 10.244.1.2:38738 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120454s
	[INFO] 10.244.1.2:35786 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177128s
	[INFO] 10.244.1.2:48640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103188s
	
	
	==> coredns [dd2844413175] <==
	[INFO] 10.244.0.4:50889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219623s
	[INFO] 10.244.0.4:50147 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000114376s
	[INFO] 10.244.0.4:48383 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000465063s
	[INFO] 10.244.0.4:57201 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001758859s
	[INFO] 10.244.1.2:34940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148699s
	[INFO] 10.244.1.2:52125 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000150404s
	[INFO] 10.244.1.2:33924 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001932601s
	[INFO] 10.244.2.3:51276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139359s
	[INFO] 10.244.2.3:50554 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01541771s
	[INFO] 10.244.2.3:59531 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003229091s
	[INFO] 10.244.2.3:37331 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183708s
	[INFO] 10.244.2.3:60486 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000184885s
	[INFO] 10.244.0.4:56715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098738s
	[INFO] 10.244.0.4:58597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114041s
	[INFO] 10.244.0.4:44865 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052207s
	[INFO] 10.244.0.4:45588 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005185s
	[INFO] 10.244.1.2:58043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118248s
	[INFO] 10.244.1.2:42031 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001467571s
	[INFO] 10.244.1.2:41248 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011067s
	[INFO] 10.244.2.3:38627 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149238s
	[INFO] 10.244.2.3:41829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147486s
	[INFO] 10.244.0.4:57864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122317s
	[INFO] 10.244.0.4:41410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171073s
	[INFO] 10.244.0.4:40270 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071428s
	[INFO] 10.244.1.2:53470 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130306s
	
	
	==> describe nodes <==
	Name:               ha-238496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-238496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3
	                    minikube.k8s.io/name=ha-238496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T23_16_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 23:16:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-238496
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 23:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 23:19:40 +0000   Mon, 29 Jul 2024 23:16:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 23:19:40 +0000   Mon, 29 Jul 2024 23:16:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 23:19:40 +0000   Mon, 29 Jul 2024 23:16:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 23:19:40 +0000   Mon, 29 Jul 2024 23:16:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    ha-238496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e97d1c0e8fa74f90a32b064d1e8b3b0d
	  System UUID:                e97d1c0e-8fa7-4f90-a32b-064d1e8b3b0d
	  Boot ID:                    0c7e5178-e51f-4168-a6f6-c6311d7885ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ftt4w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-7db6d8ff4d-p8nps             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m35s
	  kube-system                 coredns-7db6d8ff4d-tjplq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m35s
	  kube-system                 etcd-ha-238496                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m48s
	  kube-system                 kindnet-55jmm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m35s
	  kube-system                 kube-apiserver-ha-238496             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-ha-238496    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-nrvw6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-scheduler-ha-238496             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-vip-ha-238496                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m34s  kube-proxy       
	  Normal  Starting                 3m48s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s  kubelet          Node ha-238496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s  kubelet          Node ha-238496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s  kubelet          Node ha-238496 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m36s  node-controller  Node ha-238496 event: Registered Node ha-238496 in Controller
	  Normal  NodeReady                3m18s  kubelet          Node ha-238496 status is now: NodeReady
	  Normal  RegisteredNode           2m9s   node-controller  Node ha-238496 event: Registered Node ha-238496 in Controller
	  Normal  RegisteredNode           48s    node-controller  Node ha-238496 event: Registered Node ha-238496 in Controller
	
	
	Name:               ha-238496-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-238496-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3
	                    minikube.k8s.io/name=ha-238496
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T23_17_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 23:17:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-238496-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 23:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 23:19:29 +0000   Mon, 29 Jul 2024 23:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 23:19:29 +0000   Mon, 29 Jul 2024 23:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 23:19:29 +0000   Mon, 29 Jul 2024 23:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 23:19:29 +0000   Mon, 29 Jul 2024 23:17:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    ha-238496-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fc7dd8b885b4325887ebc09b43b1482
	  System UUID:                7fc7dd8b-885b-4325-887e-bc09b43b1482
	  Boot ID:                    f7e261b6-597f-4c06-90dc-cfd2aeb63c93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-scl6h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 etcd-ha-238496-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m26s
	  kube-system                 kindnet-xvzff                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m28s
	  kube-system                 kube-apiserver-ha-238496-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-ha-238496-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-m6vdn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-ha-238496-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-vip-ha-238496-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node ha-238496-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node ha-238496-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s (x7 over 2m28s)  kubelet          Node ha-238496-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m26s                  node-controller  Node ha-238496-m02 event: Registered Node ha-238496-m02 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-238496-m02 event: Registered Node ha-238496-m02 in Controller
	  Normal  RegisteredNode           48s                    node-controller  Node ha-238496-m02 event: Registered Node ha-238496-m02 in Controller
	
	
	Name:               ha-238496-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-238496-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3
	                    minikube.k8s.io/name=ha-238496
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T23_18_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 23:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-238496-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 23:19:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 23:19:48 +0000   Mon, 29 Jul 2024 23:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 23:19:48 +0000   Mon, 29 Jul 2024 23:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 23:19:48 +0000   Mon, 29 Jul 2024 23:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 23:19:48 +0000   Mon, 29 Jul 2024 23:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-238496-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 614dd3033977494daebb32e9c448932b
	  System UUID:                614dd303-3977-494d-aebb-32e9c448932b
	  Boot ID:                    7e41f2a0-0988-42a6-8f82-c1af035674ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d42qb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 etcd-ha-238496-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         65s
	  kube-system                 kindnet-kb2hw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      67s
	  kube-system                 kube-apiserver-ha-238496-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-ha-238496-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-84q2j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-ha-238496-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-vip-ha-238496-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  67s (x8 over 68s)  kubelet          Node ha-238496-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 68s)  kubelet          Node ha-238496-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x7 over 68s)  kubelet          Node ha-238496-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           66s                node-controller  Node ha-238496-m03 event: Registered Node ha-238496-m03 in Controller
	  Normal  RegisteredNode           64s                node-controller  Node ha-238496-m03 event: Registered Node ha-238496-m03 in Controller
	  Normal  RegisteredNode           48s                node-controller  Node ha-238496-m03 event: Registered Node ha-238496-m03 in Controller
	
	
	==> dmesg <==
	[  +4.617993] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.922249] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +0.062788] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060074] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +2.094393] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.305877] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.124394] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.135649] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +2.281421] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.210257] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.120089] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.128936] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.145315] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +3.616653] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +2.741843] kauditd_printk_skb: 150 callbacks suppressed
	[  +0.527955] systemd-fstab-generator[1442]: Ignoring "noauto" option for root device
	[  +5.026301] systemd-fstab-generator[1627]: Ignoring "noauto" option for root device
	[  +0.062925] kauditd_printk_skb: 54 callbacks suppressed
	[Jul29 23:16] systemd-fstab-generator[2119]: Ignoring "noauto" option for root device
	[  +0.104825] kauditd_printk_skb: 81 callbacks suppressed
	[ +13.770767] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.274468] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 23:17] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [189714a08644] <==
	{"level":"info","ts":"2024-07-29T23:18:47.227497Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:47.228109Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:47.229494Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01","remote-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-07-29T23:18:47.229328Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:47.229398Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:47.229409Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:47.229443Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"warn","ts":"2024-07-29T23:18:47.326321Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"17cb303f1eeb5f01","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-29T23:18:48.316118Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"17cb303f1eeb5f01","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-29T23:18:49.316606Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"17cb303f1eeb5f01","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-29T23:18:49.476459Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:49.497913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:49.499017Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:49.51338Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8069059f79d446ff","to":"17cb303f1eeb5f01","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T23:18:49.513524Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"info","ts":"2024-07-29T23:18:49.51521Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8069059f79d446ff","to":"17cb303f1eeb5f01","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T23:18:49.515325Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"8069059f79d446ff","remote-peer-id":"17cb303f1eeb5f01"}
	{"level":"warn","ts":"2024-07-29T23:18:50.315769Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"17cb303f1eeb5f01","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-29T23:18:51.318391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff switched to configuration voters=(1714517130804420353 4026652155228842115 9252933091911288575)"}
	{"level":"info","ts":"2024-07-29T23:18:51.31855Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"3af003d6f0036250","local-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2024-07-29T23:18:51.318607Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"8069059f79d446ff","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"17cb303f1eeb5f01"}
	{"level":"warn","ts":"2024-07-29T23:19:09.155027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.709386ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1109"}
	{"level":"info","ts":"2024-07-29T23:19:09.155317Z","caller":"traceutil/trace.go:171","msg":"trace[761163028] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1021; }","duration":"109.088738ms","start":"2024-07-29T23:19:09.046193Z","end":"2024-07-29T23:19:09.155282Z","steps":["trace[761163028] 'agreement among raft nodes before linearized reading'  (duration: 47.665659ms)","trace[761163028] 'range keys from in-memory index tree'  (duration: 60.974798ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T23:19:09.192095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.48704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-238496-m03\" ","response":"range_response_count:1 size:4375"}
	{"level":"info","ts":"2024-07-29T23:19:09.192344Z","caller":"traceutil/trace.go:171","msg":"trace[352014540] range","detail":"{range_begin:/registry/minions/ha-238496-m03; range_end:; response_count:1; response_revision:1021; }","duration":"114.742006ms","start":"2024-07-29T23:19:09.077585Z","end":"2024-07-29T23:19:09.192327Z","steps":["trace[352014540] 'agreement among raft nodes before linearized reading'  (duration: 114.137886ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:19:54 up 4 min,  0 users,  load average: 0.37, 0.28, 0.12
	Linux ha-238496 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [906ec8f4a759] <==
	I0729 23:19:05.130611       1 main.go:299] handling current node
	I0729 23:19:15.127058       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0729 23:19:15.127370       1 main.go:322] Node ha-238496-m02 has CIDR [10.244.1.0/24] 
	I0729 23:19:15.127673       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0729 23:19:15.127801       1 main.go:322] Node ha-238496-m03 has CIDR [10.244.2.0/24] 
	I0729 23:19:15.127989       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 23:19:15.128033       1 main.go:299] handling current node
	I0729 23:19:25.123599       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 23:19:25.123658       1 main.go:299] handling current node
	I0729 23:19:25.123678       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0729 23:19:25.123684       1 main.go:322] Node ha-238496-m02 has CIDR [10.244.1.0/24] 
	I0729 23:19:25.123960       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0729 23:19:25.123986       1 main.go:322] Node ha-238496-m03 has CIDR [10.244.2.0/24] 
	I0729 23:19:35.127512       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 23:19:35.127587       1 main.go:299] handling current node
	I0729 23:19:35.127606       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0729 23:19:35.127612       1 main.go:322] Node ha-238496-m02 has CIDR [10.244.1.0/24] 
	I0729 23:19:35.128099       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0729 23:19:35.128132       1 main.go:322] Node ha-238496-m03 has CIDR [10.244.2.0/24] 
	I0729 23:19:45.123262       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 23:19:45.123309       1 main.go:299] handling current node
	I0729 23:19:45.123325       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0729 23:19:45.123331       1 main.go:322] Node ha-238496-m02 has CIDR [10.244.1.0/24] 
	I0729 23:19:45.123590       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0729 23:19:45.123602       1 main.go:322] Node ha-238496-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [4607f65fdc74] <==
	I0729 23:16:03.392746       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 23:16:04.170969       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 23:16:04.177627       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 23:16:04.177698       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 23:16:04.865424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 23:16:04.906737       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 23:16:04.995294       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 23:16:05.001863       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113]
	I0729 23:16:05.003053       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 23:16:05.007289       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 23:16:05.187097       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 23:16:06.220058       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 23:16:06.244978       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 23:16:06.262209       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 23:16:19.197134       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 23:16:19.306858       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 23:19:51.666832       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46784: use of closed network connection
	E0729 23:19:51.900211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46800: use of closed network connection
	E0729 23:19:52.089837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46812: use of closed network connection
	E0729 23:19:52.401910       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46848: use of closed network connection
	E0729 23:19:52.595609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46858: use of closed network connection
	E0729 23:19:52.792814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46886: use of closed network connection
	E0729 23:19:53.082395       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46916: use of closed network connection
	E0729 23:19:53.260351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46932: use of closed network connection
	E0729 23:19:53.462817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46954: use of closed network connection
	
	
	==> kube-controller-manager [2127946fcd8b] <==
	I0729 23:19:16.735098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="161.7027ms"
	I0729 23:19:16.948440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="212.939882ms"
	I0729 23:19:16.995409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.910529ms"
	I0729 23:19:17.038535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.06149ms"
	I0729 23:19:17.039551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="937.395µs"
	I0729 23:19:17.067733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.626198ms"
	I0729 23:19:17.069562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.586µs"
	I0729 23:19:18.260665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.796µs"
	I0729 23:19:18.282572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.075µs"
	I0729 23:19:18.288809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.98µs"
	I0729 23:19:18.314246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="726.325µs"
	I0729 23:19:18.324551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.065µs"
	I0729 23:19:18.329486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.04µs"
	I0729 23:19:18.518217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.659µs"
	I0729 23:19:18.544739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.531µs"
	I0729 23:19:19.827540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.691285ms"
	I0729 23:19:19.827958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="350.714µs"
	I0729 23:19:20.641120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.588126ms"
	I0729 23:19:20.641419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="160.847µs"
	I0729 23:19:22.391760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.170401ms"
	I0729 23:19:22.391880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.585µs"
	I0729 23:19:50.847022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.998µs"
	I0729 23:19:51.883422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.014µs"
	I0729 23:19:51.902395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.001µs"
	I0729 23:19:51.913198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.514µs"
	
	
	==> kube-proxy [2bb8727b4690] <==
	I0729 23:16:20.042835       1 server_linux.go:69] "Using iptables proxy"
	I0729 23:16:20.054809       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	I0729 23:16:20.102319       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 23:16:20.102375       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 23:16:20.102397       1 server_linux.go:165] "Using iptables Proxier"
	I0729 23:16:20.105809       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 23:16:20.106413       1 server.go:872] "Version info" version="v1.30.3"
	I0729 23:16:20.106450       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 23:16:20.108364       1 config.go:192] "Starting service config controller"
	I0729 23:16:20.108406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 23:16:20.108456       1 config.go:101] "Starting endpoint slice config controller"
	I0729 23:16:20.108462       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 23:16:20.109671       1 config.go:319] "Starting node config controller"
	I0729 23:16:20.109718       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 23:16:20.209255       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 23:16:20.209265       1 shared_informer.go:320] Caches are synced for service config
	I0729 23:16:20.209811       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dc0f824c8c08] <==
	E0729 23:16:04.323274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 23:16:04.326788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 23:16:04.326980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 23:16:04.383451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 23:16:04.383717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 23:16:04.534628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 23:16:04.535476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 23:16:04.556376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 23:16:04.556427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 23:16:04.573697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 23:16:04.573750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 23:16:04.653266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 23:16:04.653315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 23:16:04.740804       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 23:16:04.740888       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 23:16:06.457680       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 23:18:47.120262       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kb2hw\": pod kindnet-kb2hw is already assigned to node \"ha-238496-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-kb2hw" node="ha-238496-m03"
	E0729 23:18:47.122790       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ef875a41-530f-48ba-b034-d08a8a7acbbc(kube-system/kindnet-kb2hw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2hw"
	E0729 23:18:47.122887       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2hw\": pod kindnet-kb2hw is already assigned to node \"ha-238496-m03\"" pod="kube-system/kindnet-kb2hw"
	I0729 23:18:47.123210       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2hw" node="ha-238496-m03"
	I0729 23:19:16.509862       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="776d65d2-03d8-4edb-9781-6d7d4967e364" pod="default/busybox-fc5497c4f-8ql68" assumedNode="ha-238496-m03" currentNode="ha-238496-m02"
	E0729 23:19:16.512667       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8ql68\": pod busybox-fc5497c4f-8ql68 is already assigned to node \"ha-238496-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-8ql68" node="ha-238496-m02"
	E0729 23:19:16.512731       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 776d65d2-03d8-4edb-9781-6d7d4967e364(default/busybox-fc5497c4f-8ql68) was assumed on ha-238496-m02 but assigned to ha-238496-m03" pod="default/busybox-fc5497c4f-8ql68"
	E0729 23:19:16.512750       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8ql68\": pod busybox-fc5497c4f-8ql68 is already assigned to node \"ha-238496-m03\"" pod="default/busybox-fc5497c4f-8ql68"
	I0729 23:19:16.512778       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-8ql68" node="ha-238496-m03"
	
	
	==> kubelet <==
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.604936    2126 topology_manager.go:215] "Topology Admit Handler" podUID="f100377c-7ba9-4012-a20e-ac31710cdc43" podNamespace="default" podName="busybox-fc5497c4f-2rl8g"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: E0729 23:19:16.665071    2126 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-sdvlm], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-fc5497c4f-2rl8g" podUID="f100377c-7ba9-4012-a20e-ac31710cdc43"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.696959    2126 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdvlm\" (UniqueName: \"kubernetes.io/projected/f100377c-7ba9-4012-a20e-ac31710cdc43-kube-api-access-sdvlm\") pod \"busybox-fc5497c4f-2rl8g\" (UID: \"f100377c-7ba9-4012-a20e-ac31710cdc43\") " pod="default/busybox-fc5497c4f-2rl8g"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.714770    2126 status_manager.go:877] "Failed to update status for pod" pod="default/busybox-fc5497c4f-2rl8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f100377c-7ba9-4012-a20e-ac31710cdc43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-29T23:19:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-29T23:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-29T23:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [busybox]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-29T23:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [busybox]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"gcr.io/k8s-minikube/busybox:1.28\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"busybox\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.39.113\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.39.113\\\"}],\\\"startTime\\\":\\\"2024-07-29T23:19:16Z\\\"}}\" for pod \"default\"/\"busybox-fc5497c4f-2rl8g\": pods \"busybox-fc5497c4f-2rl8g\" not found
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.732196    2126 topology_manager.go:215] "Topology Admit Handler" podUID="22830bb4-9b86-42f8-8354-48e4b8d2f29b" podNamespace="default" podName="busybox-fc5497c4f-df7wx"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.743186    2126 topology_manager.go:215] "Topology Admit Handler" podUID="8ad271f0-7e72-47c4-86ce-f549f1e2fc64" podNamespace="default" podName="busybox-fc5497c4f-6d7mp"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.798376    2126 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2c8\" (UniqueName: \"kubernetes.io/projected/22830bb4-9b86-42f8-8354-48e4b8d2f29b-kube-api-access-kr2c8\") pod \"busybox-fc5497c4f-df7wx\" (UID: \"22830bb4-9b86-42f8-8354-48e4b8d2f29b\") " pod="default/busybox-fc5497c4f-df7wx"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: I0729 23:19:16.798673    2126 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4l6x\" (UniqueName: \"kubernetes.io/projected/8ad271f0-7e72-47c4-86ce-f549f1e2fc64-kube-api-access-h4l6x\") pod \"busybox-fc5497c4f-6d7mp\" (UID: \"8ad271f0-7e72-47c4-86ce-f549f1e2fc64\") " pod="default/busybox-fc5497c4f-6d7mp"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: E0729 23:19:16.809833    2126 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kr2c8], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-fc5497c4f-df7wx" podUID="22830bb4-9b86-42f8-8354-48e4b8d2f29b"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: E0729 23:19:16.810972    2126 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-h4l6x], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-fc5497c4f-6d7mp" podUID="8ad271f0-7e72-47c4-86ce-f549f1e2fc64"
	Jul 29 23:19:16 ha-238496 kubelet[2126]: E0729 23:19:16.813121    2126 projected.go:200] Error preparing data for projected volume kube-api-access-sdvlm for pod default/busybox-fc5497c4f-2rl8g: failed to fetch token: pod "busybox-fc5497c4f-2rl8g" not found
	Jul 29 23:19:16 ha-238496 kubelet[2126]: E0729 23:19:16.813755    2126 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f100377c-7ba9-4012-a20e-ac31710cdc43-kube-api-access-sdvlm podName:f100377c-7ba9-4012-a20e-ac31710cdc43 nodeName:}" failed. No retries permitted until 2024-07-29 23:19:17.313457402 +0000 UTC m=+191.294774399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sdvlm" (UniqueName: "kubernetes.io/projected/f100377c-7ba9-4012-a20e-ac31710cdc43-kube-api-access-sdvlm") pod "busybox-fc5497c4f-2rl8g" (UID: "f100377c-7ba9-4012-a20e-ac31710cdc43") : failed to fetch token: pod "busybox-fc5497c4f-2rl8g" not found
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.303331    2126 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr2c8\" (UniqueName: \"kubernetes.io/projected/22830bb4-9b86-42f8-8354-48e4b8d2f29b-kube-api-access-kr2c8\") pod \"22830bb4-9b86-42f8-8354-48e4b8d2f29b\" (UID: \"22830bb4-9b86-42f8-8354-48e4b8d2f29b\") "
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.303434    2126 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4l6x\" (UniqueName: \"kubernetes.io/projected/8ad271f0-7e72-47c4-86ce-f549f1e2fc64-kube-api-access-h4l6x\") pod \"8ad271f0-7e72-47c4-86ce-f549f1e2fc64\" (UID: \"8ad271f0-7e72-47c4-86ce-f549f1e2fc64\") "
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.303616    2126 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sdvlm\" (UniqueName: \"kubernetes.io/projected/f100377c-7ba9-4012-a20e-ac31710cdc43-kube-api-access-sdvlm\") on node \"ha-238496\" DevicePath \"\""
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.308363    2126 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad271f0-7e72-47c4-86ce-f549f1e2fc64-kube-api-access-h4l6x" (OuterVolumeSpecName: "kube-api-access-h4l6x") pod "8ad271f0-7e72-47c4-86ce-f549f1e2fc64" (UID: "8ad271f0-7e72-47c4-86ce-f549f1e2fc64"). InnerVolumeSpecName "kube-api-access-h4l6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.313036    2126 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22830bb4-9b86-42f8-8354-48e4b8d2f29b-kube-api-access-kr2c8" (OuterVolumeSpecName: "kube-api-access-kr2c8") pod "22830bb4-9b86-42f8-8354-48e4b8d2f29b" (UID: "22830bb4-9b86-42f8-8354-48e4b8d2f29b"). InnerVolumeSpecName "kube-api-access-kr2c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.404689    2126 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kr2c8\" (UniqueName: \"kubernetes.io/projected/22830bb4-9b86-42f8-8354-48e4b8d2f29b-kube-api-access-kr2c8\") on node \"ha-238496\" DevicePath \"\""
	Jul 29 23:19:17 ha-238496 kubelet[2126]: I0729 23:19:17.404774    2126 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h4l6x\" (UniqueName: \"kubernetes.io/projected/8ad271f0-7e72-47c4-86ce-f549f1e2fc64-kube-api-access-h4l6x\") on node \"ha-238496\" DevicePath \"\""
	Jul 29 23:19:18 ha-238496 kubelet[2126]: I0729 23:19:18.193836    2126 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f100377c-7ba9-4012-a20e-ac31710cdc43" path="/var/lib/kubelet/pods/f100377c-7ba9-4012-a20e-ac31710cdc43/volumes"
	Jul 29 23:19:18 ha-238496 kubelet[2126]: I0729 23:19:18.513992    2126 topology_manager.go:215] "Topology Admit Handler" podUID="1af09aea-0782-4e55-a8d0-349bcfa014a2" podNamespace="default" podName="busybox-fc5497c4f-ftt4w"
	Jul 29 23:19:18 ha-238496 kubelet[2126]: I0729 23:19:18.613129    2126 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bssld\" (UniqueName: \"kubernetes.io/projected/1af09aea-0782-4e55-a8d0-349bcfa014a2-kube-api-access-bssld\") pod \"busybox-fc5497c4f-ftt4w\" (UID: \"1af09aea-0782-4e55-a8d0-349bcfa014a2\") " pod="default/busybox-fc5497c4f-ftt4w"
	Jul 29 23:19:20 ha-238496 kubelet[2126]: I0729 23:19:20.184289    2126 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22830bb4-9b86-42f8-8354-48e4b8d2f29b" path="/var/lib/kubelet/pods/22830bb4-9b86-42f8-8354-48e4b8d2f29b/volumes"
	Jul 29 23:19:20 ha-238496 kubelet[2126]: I0729 23:19:20.184878    2126 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad271f0-7e72-47c4-86ce-f549f1e2fc64" path="/var/lib/kubelet/pods/8ad271f0-7e72-47c4-86ce-f549f1e2fc64/volumes"
	Jul 29 23:19:22 ha-238496 kubelet[2126]: I0729 23:19:22.342487    2126 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-ftt4w" podStartSLOduration=4.260317486 podStartE2EDuration="6.34243599s" podCreationTimestamp="2024-07-29 23:19:16 +0000 UTC" firstStartedPulling="2024-07-29 23:19:19.188446274 +0000 UTC m=+193.169763285" lastFinishedPulling="2024-07-29 23:19:21.27056479 +0000 UTC m=+195.251881789" observedRunningTime="2024-07-29 23:19:22.342022908 +0000 UTC m=+196.323339926" watchObservedRunningTime="2024-07-29 23:19:22.34243599 +0000 UTC m=+196.323753012"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-238496 -n ha-238496
helpers_test.go:261: (dbg) Run:  kubectl --context ha-238496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (39.18s)


Test pass (314/349)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 30.52
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 16.16
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.05
18 TestDownloadOnly/v1.30.3/DeleteAll 0.12
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.11
21 TestDownloadOnly/v1.31.0-beta.0/json-events 26.35
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.12
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.11
30 TestBinaryMirror 0.54
31 TestOffline 133.33
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 280.68
38 TestAddons/serial/Volcano 43.63
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestAddons/parallel/Registry 16.88
43 TestAddons/parallel/Ingress 22.18
44 TestAddons/parallel/InspektorGadget 10.73
45 TestAddons/parallel/MetricsServer 5.69
46 TestAddons/parallel/HelmTiller 21.54
48 TestAddons/parallel/CSI 65.48
49 TestAddons/parallel/Headlamp 23.88
50 TestAddons/parallel/CloudSpanner 6.92
51 TestAddons/parallel/LocalPath 56.55
52 TestAddons/parallel/NvidiaDevicePlugin 6.55
53 TestAddons/parallel/Yakd 12.13
54 TestAddons/StoppedEnableDisable 13.61
55 TestCertOptions 100.3
56 TestCertExpiration 321.41
57 TestDockerFlags 84.4
58 TestForceSystemdFlag 53.68
59 TestForceSystemdEnv 98.03
61 TestKVMDriverInstallOrUpdate 4.76
65 TestErrorSpam/setup 48.27
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.23
69 TestErrorSpam/unpause 1.34
70 TestErrorSpam/stop 15.56
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 67.47
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 43.34
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.45
82 TestFunctional/serial/CacheCmd/cache/add_local 1.43
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.14
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 40.67
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.12
93 TestFunctional/serial/LogsFileCmd 1.12
94 TestFunctional/serial/InvalidService 5.62
96 TestFunctional/parallel/ConfigCmd 0.3
97 TestFunctional/parallel/DashboardCmd 13.95
98 TestFunctional/parallel/DryRun 0.25
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.78
104 TestFunctional/parallel/ServiceCmdConnect 26.5
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 49.32
108 TestFunctional/parallel/SSHCmd 0.41
109 TestFunctional/parallel/CpCmd 1.24
110 TestFunctional/parallel/MySQL 32.12
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.29
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
120 TestFunctional/parallel/License 0.61
121 TestFunctional/parallel/Version/short 0.06
122 TestFunctional/parallel/Version/components 0.67
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
127 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
128 TestFunctional/parallel/ImageCommands/Setup 1.86
138 TestFunctional/parallel/DockerEnv/bash 0.84
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.94
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
149 TestFunctional/parallel/ServiceCmd/DeployApp 21.21
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
151 TestFunctional/parallel/ServiceCmd/List 0.5
152 TestFunctional/parallel/ProfileCmd/profile_list 0.33
153 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
155 TestFunctional/parallel/MountCmd/any-port 7.38
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
157 TestFunctional/parallel/ServiceCmd/Format 0.3
158 TestFunctional/parallel/ServiceCmd/URL 0.3
159 TestFunctional/parallel/MountCmd/specific-port 1.5
160 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
161 TestFunctional/delete_echo-server_images 0.04
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
164 TestGvisorAddon 286.22
167 TestMultiControlPlane/serial/StartCluster 240.35
169 TestMultiControlPlane/serial/PingHostFromPods 1.25
170 TestMultiControlPlane/serial/AddWorkerNode 64.19
171 TestMultiControlPlane/serial/NodeLabels 0.07
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
173 TestMultiControlPlane/serial/CopyFile 12.76
174 TestMultiControlPlane/serial/StopSecondaryNode 13.95
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
176 TestMultiControlPlane/serial/RestartSecondaryNode 48.23
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.55
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 247.92
179 TestMultiControlPlane/serial/DeleteSecondaryNode 8.24
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/StopCluster 38.46
182 TestMultiControlPlane/serial/RestartCluster 161.37
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
184 TestMultiControlPlane/serial/AddSecondaryNode 88.36
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestImageBuild/serial/Setup 49.87
189 TestImageBuild/serial/NormalBuild 2.69
190 TestImageBuild/serial/BuildWithBuildArg 1.14
191 TestImageBuild/serial/BuildWithDockerIgnore 0.82
192 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.85
196 TestJSONOutput/start/Command 68.4
197 TestJSONOutput/start/Audit 0
199 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/pause/Command 0.62
203 TestJSONOutput/pause/Audit 0
205 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/unpause/Command 0.56
209 TestJSONOutput/unpause/Audit 0
211 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
214 TestJSONOutput/stop/Command 13.34
215 TestJSONOutput/stop/Audit 0
217 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
218 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
219 TestErrorJSONOutput 0.19
224 TestMainNoArgs 0.04
225 TestMinikubeProfile 111.25
228 TestMountStart/serial/StartWithMountFirst 36.27
229 TestMountStart/serial/VerifyMountFirst 0.36
230 TestMountStart/serial/StartWithMountSecond 33.71
231 TestMountStart/serial/VerifyMountSecond 0.35
232 TestMountStart/serial/DeleteFirst 1.06
233 TestMountStart/serial/VerifyMountPostDelete 0.36
234 TestMountStart/serial/Stop 2.27
235 TestMountStart/serial/RestartStopped 27.12
236 TestMountStart/serial/VerifyMountPostStop 0.36
239 TestMultiNode/serial/FreshStart2Nodes 142.77
240 TestMultiNode/serial/DeployApp2Nodes 5.26
241 TestMultiNode/serial/PingHostFrom2Pods 0.81
242 TestMultiNode/serial/AddNode 61.26
243 TestMultiNode/serial/MultiNodeLabels 0.06
244 TestMultiNode/serial/ProfileList 0.21
245 TestMultiNode/serial/CopyFile 6.95
246 TestMultiNode/serial/StopNode 3.39
247 TestMultiNode/serial/StartAfterStop 43.47
248 TestMultiNode/serial/RestartKeepsNodes 190.03
249 TestMultiNode/serial/DeleteNode 2.32
250 TestMultiNode/serial/StopMultiNode 25.8
251 TestMultiNode/serial/RestartMultiNode 123.34
252 TestMultiNode/serial/ValidateNameConflict 49.86
257 TestPreload 182.74
259 TestScheduledStopUnix 122.28
260 TestSkaffold 141.33
263 TestRunningBinaryUpgrade 117.58
265 TestKubernetesUpgrade 228.76
285 TestStoppedBinaryUpgrade/Setup 2.88
286 TestStoppedBinaryUpgrade/Upgrade 162.85
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
289 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
290 TestNoKubernetes/serial/StartWithK8s 60.83
292 TestPause/serial/Start 121.52
293 TestNetworkPlugins/group/auto/Start 112.25
294 TestNoKubernetes/serial/StartWithStopK8s 64.38
295 TestNetworkPlugins/group/kindnet/Start 92.18
296 TestPause/serial/SecondStartNoReconfiguration 65.46
297 TestNoKubernetes/serial/Start 53.85
298 TestNetworkPlugins/group/auto/KubeletFlags 0.21
299 TestNetworkPlugins/group/auto/NetCatPod 12.24
300 TestNetworkPlugins/group/auto/DNS 0.19
301 TestNetworkPlugins/group/auto/Localhost 0.15
302 TestNetworkPlugins/group/auto/HairPin 0.16
303 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
304 TestNoKubernetes/serial/ProfileList 1.44
305 TestNoKubernetes/serial/Stop 2.43
306 TestNoKubernetes/serial/StartNoArgs 27.05
307 TestNetworkPlugins/group/calico/Start 117.54
308 TestPause/serial/Pause 0.78
309 TestPause/serial/VerifyStatus 0.25
310 TestPause/serial/Unpause 0.56
311 TestPause/serial/PauseAgain 0.68
312 TestPause/serial/DeletePaused 1.05
313 TestPause/serial/VerifyDeletedResources 3.39
314 TestNetworkPlugins/group/custom-flannel/Start 118.17
315 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
316 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
317 TestNetworkPlugins/group/false/Start 159.11
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
319 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
320 TestNetworkPlugins/group/kindnet/DNS 0.21
321 TestNetworkPlugins/group/kindnet/Localhost 0.13
322 TestNetworkPlugins/group/kindnet/HairPin 0.12
323 TestNetworkPlugins/group/enable-default-cni/Start 118.82
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/KubeletFlags 0.2
326 TestNetworkPlugins/group/calico/NetCatPod 11.24
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
329 TestNetworkPlugins/group/calico/DNS 0.2
330 TestNetworkPlugins/group/calico/Localhost 0.14
331 TestNetworkPlugins/group/calico/HairPin 0.15
332 TestNetworkPlugins/group/custom-flannel/DNS 0.2
333 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
334 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
335 TestNetworkPlugins/group/flannel/Start 79.85
336 TestNetworkPlugins/group/bridge/Start 130.34
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
339 TestNetworkPlugins/group/false/KubeletFlags 0.22
340 TestNetworkPlugins/group/false/NetCatPod 14.28
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
344 TestNetworkPlugins/group/false/DNS 0.18
345 TestNetworkPlugins/group/false/Localhost 0.17
346 TestNetworkPlugins/group/false/HairPin 0.15
347 TestNetworkPlugins/group/kubenet/Start 112.59
349 TestStartStop/group/old-k8s-version/serial/FirstStart 201.19
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
352 TestNetworkPlugins/group/flannel/NetCatPod 12.18
353 TestNetworkPlugins/group/flannel/DNS 0.16
354 TestNetworkPlugins/group/flannel/Localhost 0.16
355 TestNetworkPlugins/group/flannel/HairPin 0.16
357 TestStartStop/group/no-preload/serial/FirstStart 94.6
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
359 TestNetworkPlugins/group/bridge/NetCatPod 11.27
360 TestNetworkPlugins/group/bridge/DNS 0.2
361 TestNetworkPlugins/group/bridge/Localhost 0.18
362 TestNetworkPlugins/group/bridge/HairPin 0.17
363 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
364 TestNetworkPlugins/group/kubenet/NetCatPod 12.32
366 TestStartStop/group/embed-certs/serial/FirstStart 111.24
367 TestNetworkPlugins/group/kubenet/DNS 0.24
368 TestNetworkPlugins/group/kubenet/Localhost 0.19
369 TestNetworkPlugins/group/kubenet/HairPin 0.15
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.19
372 TestStartStop/group/no-preload/serial/DeployApp 9.37
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
374 TestStartStop/group/no-preload/serial/Stop 13.65
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
376 TestStartStop/group/no-preload/serial/SecondStart 313.46
377 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
379 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
380 TestStartStop/group/old-k8s-version/serial/Stop 13.34
381 TestStartStop/group/embed-certs/serial/DeployApp 9.28
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
384 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
385 TestStartStop/group/embed-certs/serial/Stop 12.64
386 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
387 TestStartStop/group/old-k8s-version/serial/SecondStart 393.3
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
389 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 334.98
390 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
391 TestStartStop/group/embed-certs/serial/SecondStart 338.22
392 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
393 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
394 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
395 TestStartStop/group/no-preload/serial/Pause 2.65
397 TestStartStop/group/newest-cni/serial/FirstStart 64.38
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
399 TestStartStop/group/newest-cni/serial/DeployApp 0
400 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
401 TestStartStop/group/newest-cni/serial/Stop 13.37
402 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
404 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
405 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
406 TestStartStop/group/newest-cni/serial/SecondStart 37.26
407 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
408 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.89
409 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
410 TestStartStop/group/embed-certs/serial/Pause 3.37
411 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
412 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
413 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
414 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
415 TestStartStop/group/newest-cni/serial/Pause 2.17
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
417 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
418 TestStartStop/group/old-k8s-version/serial/Pause 2.38

TestDownloadOnly/v1.20.0/json-events (30.52s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-280529 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-280529 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (30.51972044s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (30.52s)
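A passing json-events run means every line minikube wrote to stdout parsed as a JSON progress event. With -o=json, minikube emits one JSON object per line in a CloudEvents-style envelope; a short Go sketch of consuming such a stream (the field names are illustrative, not a complete schema):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event models just the parts of the envelope this sketch prints.
    type event struct {
        Type string                 `json:"type"`
        Data map[string]interface{} `json:"data"`
    }

    func main() {
        // e.g.: minikube start -o=json ... | thisprog
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON noise
            }
            fmt.Println(ev.Type, ev.Data["message"])
        }
    }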

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-280529
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-280529: exit status 85 (54.041746ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-280529 | jenkins | v1.33.1 | 29 Jul 24 23:01 UTC |          |
	|         | -p download-only-280529        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 23:01:55
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 23:01:55.099586   19423 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:01:55.099835   19423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:01:55.099845   19423 out.go:304] Setting ErrFile to fd 2...
	I0729 23:01:55.099849   19423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:01:55.100009   19423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	W0729 23:01:55.100114   19423 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19347-12221/.minikube/config/config.json: open /home/jenkins/minikube-integration/19347-12221/.minikube/config/config.json: no such file or directory
	I0729 23:01:55.100670   19423 out.go:298] Setting JSON to true
	I0729 23:01:55.101514   19423 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2611,"bootTime":1722291504,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 23:01:55.101568   19423 start.go:139] virtualization: kvm guest
	I0729 23:01:55.103651   19423 out.go:97] [download-only-280529] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 23:01:55.103762   19423 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 23:01:55.103823   19423 notify.go:220] Checking for updates...
	I0729 23:01:55.104898   19423 out.go:169] MINIKUBE_LOCATION=19347
	I0729 23:01:55.106156   19423 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 23:01:55.107449   19423 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:01:55.108749   19423 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:01:55.109967   19423 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 23:01:55.112215   19423 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 23:01:55.112485   19423 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 23:01:55.215510   19423 out.go:97] Using the kvm2 driver based on user configuration
	I0729 23:01:55.215540   19423 start.go:297] selected driver: kvm2
	I0729 23:01:55.215548   19423 start.go:901] validating driver "kvm2" against <nil>
	I0729 23:01:55.215987   19423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:01:55.216117   19423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19347-12221/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 23:01:55.230839   19423 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 23:01:55.230897   19423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 23:01:55.231365   19423 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 23:01:55.231511   19423 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 23:01:55.231567   19423 cni.go:84] Creating CNI manager for ""
	I0729 23:01:55.231583   19423 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 23:01:55.231634   19423 start.go:340] cluster config:
	{Name:download-only-280529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-280529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:01:55.231808   19423 iso.go:125] acquiring lock: {Name:mke1b110143262a7fb7eb5e1cbaa1784fa37fd0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:01:55.233664   19423 out.go:97] Downloading VM boot image ...
	I0729 23:01:55.233702   19423 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 23:02:04.912279   19423 out.go:97] Starting "download-only-280529" primary control-plane node in "download-only-280529" cluster
	I0729 23:02:04.912307   19423 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 23:02:05.015623   19423 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 23:02:05.015655   19423 cache.go:56] Caching tarball of preloaded images
	I0729 23:02:05.015837   19423 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 23:02:05.017754   19423 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 23:02:05.017779   19423 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:05.124410   19423 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 23:02:19.129604   19423 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:19.129690   19423 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:19.996211   19423 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 23:02:19.996531   19423 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/download-only-280529/config.json ...
	I0729 23:02:19.996558   19423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/download-only-280529/config.json: {Name:mk9dbaf2c160d526d8e267e25b648ee71f5789bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:02:19.996705   19423 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 23:02:19.996912   19423 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-280529 host does not exist
	  To start a cluster, run: "minikube start -p download-only-280529"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
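Note that the subtest passes even though the command exits non-zero: a download-only profile never creates a host, so "minikube logs" is expected to fail, and the test asserts the specific exit status (85) rather than success. A hedged Go sketch of that pattern, with the binary path and profile name copied from the log above:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-280529")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("got the expected exit status 85")
            return
        }
        fmt.Println("unexpected result:", err)
    }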

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-280529
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.3/json-events (16.16s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-557176 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-557176 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 : (16.16232196s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.16s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)
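preload-exists finishes in zero seconds because it only has to find the tarball cached by the json-events run above; the download itself was validated against the checksum=md5:... parameter visible in the download URL. A small sketch of such an existence-plus-md5 check (path and expected sum are copied from this report; the helper itself is illustrative):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // fileMD5 streams a file through md5 and returns the hex digest.
    func fileMD5(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        const tarball = "/home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4"
        sum, err := fileMD5(tarball)
        if err != nil {
            fmt.Println("preload missing:", err)
            return
        }
        fmt.Println("preload present, md5 =", sum, "(expected 6304692df2fe6f7b0bdd7f93d160be8c)")
    }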

TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-557176
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-557176: exit status 85 (52.966296ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-280529 | jenkins | v1.33.1 | 29 Jul 24 23:01 UTC |                     |
	|         | -p download-only-280529        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC | 29 Jul 24 23:02 UTC |
	| delete  | -p download-only-280529        | download-only-280529 | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC | 29 Jul 24 23:02 UTC |
	| start   | -o=json --download-only        | download-only-557176 | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC |                     |
	|         | -p download-only-557176        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 23:02:25
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 23:02:25.918224   19694 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:02:25.918340   19694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:02:25.918351   19694 out.go:304] Setting ErrFile to fd 2...
	I0729 23:02:25.918358   19694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:02:25.918549   19694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:02:25.919118   19694 out.go:298] Setting JSON to true
	I0729 23:02:25.919948   19694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2642,"bootTime":1722291504,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 23:02:25.920006   19694 start.go:139] virtualization: kvm guest
	I0729 23:02:25.921956   19694 out.go:97] [download-only-557176] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 23:02:25.922104   19694 notify.go:220] Checking for updates...
	I0729 23:02:25.923380   19694 out.go:169] MINIKUBE_LOCATION=19347
	I0729 23:02:25.924718   19694 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 23:02:25.925941   19694 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:02:25.927283   19694 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:02:25.928718   19694 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 23:02:25.931002   19694 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 23:02:25.931183   19694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 23:02:25.962284   19694 out.go:97] Using the kvm2 driver based on user configuration
	I0729 23:02:25.962308   19694 start.go:297] selected driver: kvm2
	I0729 23:02:25.962315   19694 start.go:901] validating driver "kvm2" against <nil>
	I0729 23:02:25.962655   19694 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:02:25.962761   19694 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19347-12221/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 23:02:25.977219   19694 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 23:02:25.977264   19694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 23:02:25.977750   19694 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 23:02:25.977927   19694 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 23:02:25.977956   19694 cni.go:84] Creating CNI manager for ""
	I0729 23:02:25.977971   19694 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 23:02:25.977986   19694 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 23:02:25.978050   19694 start.go:340] cluster config:
	{Name:download-only-557176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-557176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:02:25.978156   19694 iso.go:125] acquiring lock: {Name:mke1b110143262a7fb7eb5e1cbaa1784fa37fd0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:02:25.979831   19694 out.go:97] Starting "download-only-557176" primary control-plane node in "download-only-557176" cluster
	I0729 23:02:25.979853   19694 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 23:02:26.081082   19694 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 23:02:26.081112   19694 cache.go:56] Caching tarball of preloaded images
	I0729 23:02:26.081259   19694 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 23:02:26.083177   19694 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 23:02:26.083197   19694 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:26.193013   19694 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 23:02:40.525202   19694 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:40.525307   19694 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-557176 host does not exist
	  To start a cluster, run: "minikube start -p download-only-557176"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-557176
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.0-beta.0/json-events (26.35s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-250416 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-250416 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=kvm2 : (26.353686765s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (26.35s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-250416
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-250416: exit status 85 (54.821093ms)
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-280529 | jenkins | v1.33.1 | 29 Jul 24 23:01 UTC |                     |
	|         | -p download-only-280529             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC | 29 Jul 24 23:02 UTC |
	| delete  | -p download-only-280529             | download-only-280529 | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC | 29 Jul 24 23:02 UTC |
	| start   | -o=json --download-only             | download-only-557176 | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC |                     |
	|         | -p download-only-557176             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC | 29 Jul 24 23:02 UTC |
	| delete  | -p download-only-557176             | download-only-557176 | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC | 29 Jul 24 23:02 UTC |
	| start   | -o=json --download-only             | download-only-250416 | jenkins | v1.33.1 | 29 Jul 24 23:02 UTC |                     |
	|         | -p download-only-250416             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 23:02:42
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 23:02:42.370367   19917 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:02:42.370596   19917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:02:42.370604   19917 out.go:304] Setting ErrFile to fd 2...
	I0729 23:02:42.370608   19917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:02:42.370793   19917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:02:42.371289   19917 out.go:298] Setting JSON to true
	I0729 23:02:42.372069   19917 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2658,"bootTime":1722291504,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 23:02:42.372116   19917 start.go:139] virtualization: kvm guest
	I0729 23:02:42.374085   19917 out.go:97] [download-only-250416] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 23:02:42.374212   19917 notify.go:220] Checking for updates...
	I0729 23:02:42.375207   19917 out.go:169] MINIKUBE_LOCATION=19347
	I0729 23:02:42.376475   19917 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 23:02:42.377640   19917 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:02:42.378757   19917 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:02:42.380282   19917 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 23:02:42.382487   19917 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 23:02:42.382655   19917 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 23:02:42.412480   19917 out.go:97] Using the kvm2 driver based on user configuration
	I0729 23:02:42.412499   19917 start.go:297] selected driver: kvm2
	I0729 23:02:42.412504   19917 start.go:901] validating driver "kvm2" against <nil>
	I0729 23:02:42.412801   19917 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:02:42.412867   19917 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19347-12221/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 23:02:42.426509   19917 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 23:02:42.426541   19917 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 23:02:42.427014   19917 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 23:02:42.427184   19917 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 23:02:42.427228   19917 cni.go:84] Creating CNI manager for ""
	I0729 23:02:42.427243   19917 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 23:02:42.427252   19917 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 23:02:42.427310   19917 start.go:340] cluster config:
	{Name:download-only-250416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-250416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:02:42.427402   19917 iso.go:125] acquiring lock: {Name:mke1b110143262a7fb7eb5e1cbaa1784fa37fd0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 23:02:42.428756   19917 out.go:97] Starting "download-only-250416" primary control-plane node in "download-only-250416" cluster
	I0729 23:02:42.428766   19917 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 23:02:42.901804   19917 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0729 23:02:42.901833   19917 cache.go:56] Caching tarball of preloaded images
	I0729 23:02:42.902005   19917 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 23:02:42.903787   19917 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 23:02:42.903807   19917 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:43.009210   19917 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0729 23:02:52.802459   19917 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:52.802543   19917 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19347-12221/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 23:02:53.443892   19917 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 23:02:53.444189   19917 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/download-only-250416/config.json ...
	I0729 23:02:53.444214   19917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/download-only-250416/config.json: {Name:mk7c6ac1589ae3049370a6ef2d01bc550ae2cf83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 23:02:53.444347   19917 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 23:02:53.444479   19917 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19347-12221/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-250416 host does not exist
	  To start a cluster, run: "minikube start -p download-only-250416"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-250416
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-596697 --alsologtostderr --binary-mirror http://127.0.0.1:35763 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-596697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-596697
--- PASS: TestBinaryMirror (0.54s)
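TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:35763, so Kubernetes binaries are fetched from a local HTTP endpoint instead of the default upstream. A minimal sketch of a mirror that such a flag could point at; the dl.k8s.io-style directory layout is an assumption, not something this report confirms:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror at the address used by the test run above,
        // e.g. ./mirror/release/v1.30.3/bin/linux/amd64/kubectl.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:35763", nil))
    }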

TestOffline (133.33s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-890005 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-890005 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m12.212328207s)
helpers_test.go:175: Cleaning up "offline-docker-890005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-890005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-890005: (1.115885404s)
--- PASS: TestOffline (133.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-050487
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-050487: exit status 85 (52.905156ms)
-- stdout --
	* Profile "addons-050487" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-050487"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
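
The assertion here keys on the exit status (85) as much as on the output. A sketch of pulling that code out in Go with os/exec, the way a harness can tell "profile not found" apart from other failures (command and arguments copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", "addons-050487")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// ExitCode reports the status the process exited with,
		// e.g. 85 for the missing profile in the log above.
		fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run:", err) // e.g. binary not found
		return
	}
	fmt.Printf("succeeded:\n%s", out)
}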

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-050487
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-050487: exit status 85 (50.942095ms)
-- stdout --
	* Profile "addons-050487" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-050487"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (280.68s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-050487 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-050487 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m40.675965229s)
--- PASS: TestAddons/Setup (280.68s)

TestAddons/serial/Volcano (43.63s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 18.623855ms
addons_test.go:905: volcano-admission stabilized in 18.719298ms
addons_test.go:897: volcano-scheduler stabilized in 18.755447ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-6c9kb" [bc861aa1-0d56-4993-9ab6-20e59275c954] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004563213s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-cd8cb" [38749db8-e0b9-4c5b-82ef-726e7656ab6c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004431686s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-svxmb" [96a4996b-8a62-43a7-af9c-197c00c77f33] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004544033s
addons_test.go:932: (dbg) Run:  kubectl --context addons-050487 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-050487 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-050487 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b0328659-0041-4d23-9ac7-93fc11362166] Pending
helpers_test.go:344: "test-job-nginx-0" [b0328659-0041-4d23-9ac7-93fc11362166] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b0328659-0041-4d23-9ac7-93fc11362166] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.004579288s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable volcano --alsologtostderr -v=1: (10.232450655s)
--- PASS: TestAddons/serial/Volcano (43.63s)
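
The three "waiting 6m0s for pods matching ..." checks above are a poll-until-healthy loop inside the test helpers. Outside the harness, an equivalent wait can be delegated to kubectl's readiness gate; a sketch in Go (label, namespace, and timeout taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of "waiting 6m0s for pods matching
	// app=volcano-scheduler", delegated to kubectl wait.
	cmd := exec.Command("kubectl", "--context", "addons-050487",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=volcano-scheduler",
		"-n", "volcano-system", "--timeout=6m")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "pods never became ready:", err)
		os.Exit(1)
	}
}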

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-050487 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-050487 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Registry (16.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.190094ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-k4zsw" [71bd44e8-d1d8-44cd-b216-0bfc38edff50] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006065316s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l5qwb" [18fe2c8b-96d2-47da-9b44-503765c74a2e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006463981s
addons_test.go:342: (dbg) Run:  kubectl --context addons-050487 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-050487 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-050487 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.147929632s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 ip
2024/07/29 23:09:10 [DEBUG] GET http://192.168.39.213:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.88s)

TestAddons/parallel/Ingress (22.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-050487 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-050487 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-050487 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9b298e62-3b4f-4f21-bbf2-8bc487e54b0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9b298e62-3b4f-4f21-bbf2-8bc487e54b0b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004890787s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-050487 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.213
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable ingress-dns --alsologtostderr -v=1: (1.314169723s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable ingress --alsologtostderr -v=1: (7.67679483s)
--- PASS: TestAddons/parallel/Ingress (22.18s)
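
The routing check curls 127.0.0.1 while forcing Host: nginx.example.com, which exercises name-based ingress routing without real DNS (in the test the curl runs inside the VM via minikube ssh). The same request in Go, where setting Request.Host overrides the Host header that is sent (the URL here is illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Request.Host overrides the Host header actually sent,
	// steering the ingress controller to the nginx backend.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}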

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fvbpv" [0b193162-9f7b-463e-bbc3-ffed9fbb126d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004663534s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-050487
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-050487: (5.727179661s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (5.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.535021ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-jnqbj" [e8361a22-59df-4685-a2cd-3cb694b97fd6] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005133563s
addons_test.go:417: (dbg) Run:  kubectl --context addons-050487 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.69s)

TestAddons/parallel/HelmTiller (21.54s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.682762ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-25lft" [9dbc71eb-3f62-494d-9036-645b7f28a587] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.011580215s
addons_test.go:475: (dbg) Run:  kubectl --context addons-050487 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-050487 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (14.956976202s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (21.54s)

TestAddons/parallel/CSI (65.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.35587ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-050487 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-050487 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [53740dbd-765a-4f13-8b66-aab15992c800] Pending
helpers_test.go:344: "task-pv-pod" [53740dbd-765a-4f13-8b66-aab15992c800] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [53740dbd-765a-4f13-8b66-aab15992c800] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0047259s
addons_test.go:590: (dbg) Run:  kubectl --context addons-050487 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-050487 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-050487 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-050487 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-050487 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-050487 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-050487 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3308f2e3-378a-4830-b6cc-47b83096292b] Pending
helpers_test.go:344: "task-pv-pod-restore" [3308f2e3-378a-4830-b6cc-47b83096292b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3308f2e3-378a-4830-b6cc-47b83096292b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004224473s
addons_test.go:632: (dbg) Run:  kubectl --context addons-050487 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-050487 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-050487 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.807878926s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.48s)
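
Every PVC and snapshot wait above is the same pattern: re-run a kubectl jsonpath query until it returns the expected value or a deadline passes. A generic sketch of that loop (waitForJSONPath is a made-up helper, not minikube's; context, resource names, and timeout mirror the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForJSONPath re-runs the kubectl query until it yields want
// or the timeout elapses. (Illustrative helper, not minikube's.)
func waitForJSONPath(args []string, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", args...).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q", want)
}

func main() {
	// PVC "hpvc" until Bound, as in the loop above.
	err := waitForJSONPath([]string{
		"--context", "addons-050487", "get", "pvc", "hpvc",
		"-n", "default", "-o", "jsonpath={.status.phase}",
	}, "Bound", 6*time.Minute)
	fmt.Println(err) // nil once the claim reports Bound

	// The snapshot wait is the same shape, with
	// jsonpath={.status.readyToUse} polled until "true".
}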

TestAddons/parallel/Headlamp (23.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-050487 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-wsj4b" [189f2a89-95b5-46ac-ba69-47c38bffe264] Pending
helpers_test.go:344: "headlamp-7867546754-wsj4b" [189f2a89-95b5-46ac-ba69-47c38bffe264] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-wsj4b" [189f2a89-95b5-46ac-ba69-47c38bffe264] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.003912122s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable headlamp --alsologtostderr -v=1: (5.996642787s)
--- PASS: TestAddons/parallel/Headlamp (23.88s)

TestAddons/parallel/CloudSpanner (6.92s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-cwghm" [fc4be576-72f2-4e55-9719-b2f97d3f1031] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00612243s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-050487
--- PASS: TestAddons/parallel/CloudSpanner (6.92s)

TestAddons/parallel/LocalPath (56.55s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-050487 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-050487 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [57e16aa1-a799-4262-8351-49c0276d61bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [57e16aa1-a799-4262-8351-49c0276d61bc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [57e16aa1-a799-4262-8351-49c0276d61bc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00412964s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-050487 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 ssh "cat /opt/local-path-provisioner/pvc-189e4d23-d506-4b6f-ae6f-4fa19a8c60c9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-050487 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-050487 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.591986253s)
--- PASS: TestAddons/parallel/LocalPath (56.55s)

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zpdf6" [ada91c5d-5b0a-4d91-af0c-d16918b2d36e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007023983s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-050487
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (12.13s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-88bwj" [391448d7-90bb-4788-a2aa-0ab3b82d227f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006615218s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-050487 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-050487 addons disable yakd --alsologtostderr -v=1: (6.117736536s)
--- PASS: TestAddons/parallel/Yakd (12.13s)

TestAddons/StoppedEnableDisable (13.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-050487
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-050487: (13.342533323s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-050487
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-050487
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-050487
--- PASS: TestAddons/StoppedEnableDisable (13.61s)

TestCertOptions (100.3s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-665946 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-665946 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m38.713827053s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-665946 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-665946 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-665946 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-665946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-665946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-665946: (1.103533149s)
--- PASS: TestCertOptions (100.30s)
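
The openssl x509 -text dump is checked for the extra --apiserver-ips/--apiserver-names values, which must show up as subject alternative names. The same inspection in Go with crypto/x509, assuming apiserver.crt has been copied out of the VM to the working directory (an illustrative setup, not the test's):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs/
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("DNS SANs:", cert.DNSNames) // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses)
	// Spot-check one of the --apiserver-ips values from the invocation.
	want := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			fmt.Println("found", want)
		}
	}
}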

TestCertExpiration (321.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-610525 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-610525 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m49.457656882s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-610525 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-610525 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (30.917560451s)
helpers_test.go:175: Cleaning up "cert-expiration-610525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-610525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-610525: (1.031663253s)
--- PASS: TestCertExpiration (321.41s)
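
The durations suggest the test waits out the 3m validity before restarting with --cert-expiration=8760h (one year) to force regeneration; that wait is an inference from the timings, not spelled out in the log. The decision being exercised reduces to time arithmetic on the certificate's NotAfter; a sketch with a made-up helper and renewal window:

package main

import (
	"fmt"
	"time"
)

// needsRegeneration mirrors the kind of check a cert-expiration
// rollover relies on: regenerate once the cert is within the
// renewal window or already expired. (Illustrative helper.)
func needsRegeneration(notAfter time.Time, window time.Duration) bool {
	return time.Until(notAfter) < window
}

func main() {
	shortLived := time.Now().Add(3 * time.Minute) // --cert-expiration=3m
	longLived := time.Now().Add(8760 * time.Hour) // --cert-expiration=8760h
	fmt.Println(needsRegeneration(shortLived, 5*time.Minute)) // true: about to lapse
	fmt.Println(needsRegeneration(longLived, 5*time.Minute))  // false
}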

TestDockerFlags (84.4s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-145641 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-145641 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m22.836795971s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-145641 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-145641 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-145641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-145641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-145641: (1.085053628s)
--- PASS: TestDockerFlags (84.40s)
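
The --docker-env assertions read systemctl show docker --property=Environment, which prints a single Environment=FOO=BAR BAZ=BAT line. A sketch of parsing that line in Go (it handles only unquoted space-separated values, which is all this test sets):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Shape of the output from:
	//   systemctl show docker --property=Environment --no-pager
	line := "Environment=FOO=BAR BAZ=BAT"

	env := map[string]string{}
	for _, kv := range strings.Fields(strings.TrimPrefix(line, "Environment=")) {
		if k, v, ok := strings.Cut(kv, "="); ok {
			env[k] = v
		}
	}
	fmt.Println(env["FOO"], env["BAZ"]) // BAR BAT
}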

TestForceSystemdFlag (53.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-939196 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-939196 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (52.272086967s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-939196 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-939196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-939196
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-939196: (1.13333193s)
--- PASS: TestForceSystemdFlag (53.68s)

TestForceSystemdEnv (98.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-536378 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-536378 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m36.661736608s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-536378 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-536378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-536378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-536378: (1.056215706s)
--- PASS: TestForceSystemdEnv (98.03s)

TestKVMDriverInstallOrUpdate (4.76s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.76s)

TestErrorSpam/setup (48.27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-051496 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-051496 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-051496 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-051496 --driver=kvm2 : (48.272124404s)
--- PASS: TestErrorSpam/setup (48.27s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (1.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 pause
--- PASS: TestErrorSpam/pause (1.23s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (15.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 stop: (12.490735282s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 stop: (1.559446076s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-051496 --log_dir /tmp/nospam-051496 stop: (1.504456705s)
--- PASS: TestErrorSpam/stop (15.56s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19347-12221/.minikube/files/etc/test/nested/copy/19411/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652848 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-652848 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m7.473363993s)
--- PASS: TestFunctional/serial/StartWithProxy (67.47s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.34s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652848 --alsologtostderr -v=8
E0729 23:12:50.600514   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:50.606238   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:50.616487   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:50.636825   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:50.677140   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:50.757476   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:50.917871   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:51.238465   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:51.879429   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:53.159963   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:12:55.721024   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:13:00.841638   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:13:11.082672   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-652848 --alsologtostderr -v=8: (43.337613571s)
functional_test.go:659: soft start took 43.338325808s for "functional-652848" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.34s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-652848 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cache add registry.k8s.io/pause:latest
E0729 23:13:31.563757   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-652848 /tmp/TestFunctionalserialCacheCmdcacheadd_local2037000573/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cache add minikube-local-cache-test:functional-652848
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-652848 cache add minikube-local-cache-test:functional-652848: (1.110377997s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cache delete minikube-local-cache-test:functional-652848
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-652848
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.167958ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 kubectl -- --context functional-652848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-652848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 23:14:12.523915   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-652848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.671265453s)
functional_test.go:757: restart took 40.67136981s for "functional-652848" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.67s)
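
Note: --extra-config takes component.key=value pairs and is applied when the existing profile is restarted; --wait=all blocks until every verified component reports Ready. A sketch of the restart exercised above (installed minikube binary assumed):

    minikube start -p functional-652848 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all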

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-652848 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
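
Note: the health check reduces to listing the control-plane pods and reading their phase and Ready condition. A minimal kubectl equivalent (the jsonpath expression is illustrative, not taken from the test):

    kubectl --context functional-652848 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'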

TestFunctional/serial/LogsCmd (1.12s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-652848 logs: (1.115346071s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 logs --file /tmp/TestFunctionalserialLogsFileCmd4103820040/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-652848 logs --file /tmp/TestFunctionalserialLogsFileCmd4103820040/001/logs.txt: (1.116632569s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)
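
Note: the --file variant is handy for offline inspection; a sketch (the output path is illustrative):

    minikube -p functional-652848 logs --file /tmp/logs.txt
    grep -i error /tmp/logs.txt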

TestFunctional/serial/InvalidService (5.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-652848 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-652848
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-652848: exit status 115 (281.349369ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.58:31838 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-652848 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-652848 delete -f testdata/invalidsvc.yaml: (2.141728432s)
--- PASS: TestFunctional/serial/InvalidService (5.62s)
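
Note: exit status 115 corresponds to the SVC_UNREACHABLE reason shown in stderr. A sketch of the same check (assuming the test's invalidsvc.yaml, which defines a Service with no running backing pod):

    kubectl --context functional-652848 apply -f testdata/invalidsvc.yaml
    minikube -p functional-652848 service invalid-svc; echo "exit: $?"   # 115
    kubectl --context functional-652848 delete -f testdata/invalidsvc.yaml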

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 config get cpus: exit status 14 (50.177548ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 config get cpus: exit status 14 (46.960531ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
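
Note: config get exits 14 when the key is unset, which is what the test asserts twice above. A minimal sketch of the set/get/unset cycle:

    minikube -p functional-652848 config set cpus 2
    minikube -p functional-652848 config get cpus     # prints 2
    minikube -p functional-652848 config unset cpus
    minikube -p functional-652848 config get cpus     # exit status 14: key not found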

TestFunctional/parallel/DashboardCmd (13.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-652848 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-652848 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28478: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.95s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-652848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (129.420489ms)
-- stdout --
	* [functional-652848] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19347
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0729 23:14:55.404687   28342 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:14:55.404889   28342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:14:55.404897   28342 out.go:304] Setting ErrFile to fd 2...
	I0729 23:14:55.404901   28342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:14:55.405053   28342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:14:55.405580   28342 out.go:298] Setting JSON to false
	I0729 23:14:55.406486   28342 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3391,"bootTime":1722291504,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 23:14:55.406542   28342 start.go:139] virtualization: kvm guest
	I0729 23:14:55.408323   28342 out.go:177] * [functional-652848] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 23:14:55.409498   28342 notify.go:220] Checking for updates...
	I0729 23:14:55.409517   28342 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 23:14:55.410718   28342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 23:14:55.411845   28342 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:14:55.413126   28342 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:14:55.414175   28342 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 23:14:55.415221   28342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 23:14:55.416584   28342 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:14:55.417086   28342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:14:55.417142   28342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:14:55.431961   28342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0729 23:14:55.432338   28342 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:14:55.432824   28342 main.go:141] libmachine: Using API Version  1
	I0729 23:14:55.432857   28342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:14:55.433280   28342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:14:55.433492   28342 main.go:141] libmachine: (functional-652848) Calling .DriverName
	I0729 23:14:55.433783   28342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 23:14:55.434058   28342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:14:55.434101   28342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:14:55.448192   28342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0729 23:14:55.448583   28342 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:14:55.449006   28342 main.go:141] libmachine: Using API Version  1
	I0729 23:14:55.449026   28342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:14:55.449303   28342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:14:55.449467   28342 main.go:141] libmachine: (functional-652848) Calling .DriverName
	I0729 23:14:55.480532   28342 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 23:14:55.481821   28342 start.go:297] selected driver: kvm2
	I0729 23:14:55.481841   28342 start.go:901] validating driver "kvm2" against &{Name:functional-652848 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-652848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:14:55.481946   28342 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 23:14:55.484050   28342 out.go:177] 
	W0729 23:14:55.485233   28342 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 23:14:55.486297   28342 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652848 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)
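
Note: --dry-run validates flags against the existing profile without touching the VM; a memory request below the usable minimum fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as shown above. A sketch:

    minikube start -p functional-652848 --dry-run --memory 250MB --driver=kvm2
    echo $?   # 23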

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-652848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (135.881879ms)
-- stdout --
	* [functional-652848] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19347
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0729 23:14:55.267499   28314 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:14:55.267635   28314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:14:55.267646   28314 out.go:304] Setting ErrFile to fd 2...
	I0729 23:14:55.267653   28314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:14:55.267910   28314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:14:55.268414   28314 out.go:298] Setting JSON to false
	I0729 23:14:55.269449   28314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3391,"bootTime":1722291504,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 23:14:55.269512   28314 start.go:139] virtualization: kvm guest
	I0729 23:14:55.271340   28314 out.go:177] * [functional-652848] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 23:14:55.272635   28314 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 23:14:55.272719   28314 notify.go:220] Checking for updates...
	I0729 23:14:55.274929   28314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 23:14:55.276129   28314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	I0729 23:14:55.277371   28314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	I0729 23:14:55.278429   28314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 23:14:55.279677   28314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 23:14:55.281255   28314 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:14:55.281649   28314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:14:55.281706   28314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:14:55.296613   28314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0729 23:14:55.297073   28314 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:14:55.297663   28314 main.go:141] libmachine: Using API Version  1
	I0729 23:14:55.297694   28314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:14:55.298028   28314 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:14:55.298214   28314 main.go:141] libmachine: (functional-652848) Calling .DriverName
	I0729 23:14:55.298474   28314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 23:14:55.298870   28314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:14:55.298913   28314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:14:55.314021   28314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I0729 23:14:55.314500   28314 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:14:55.315053   28314 main.go:141] libmachine: Using API Version  1
	I0729 23:14:55.315074   28314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:14:55.315403   28314 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:14:55.315583   28314 main.go:141] libmachine: (functional-652848) Calling .DriverName
	I0729 23:14:55.350684   28314 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 23:14:55.351871   28314 start.go:297] selected driver: kvm2
	I0729 23:14:55.351883   28314 start.go:901] validating driver "kvm2" against &{Name:functional-652848 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-652848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 23:14:55.351993   28314 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 23:14:55.353953   28314 out.go:177] 
	W0729 23:14:55.355193   28314 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 23:14:55.356320   28314 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.78s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)
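
Note: status supports a default view, a Go-template format string, and JSON output; the template fields are those used by the test above. A sketch:

    minikube -p functional-652848 status
    minikube -p functional-652848 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    minikube -p functional-652848 status -o json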

TestFunctional/parallel/ServiceCmdConnect (26.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-652848 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-652848 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-vzcmj" [82284e7e-7564-4116-8e62-10089838b007] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-vzcmj" [82284e7e-7564-4116-8e62-10089838b007] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.004093357s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.58:30195
functional_test.go:1671: http://192.168.39.58:30195: success! body:
Hostname: hello-node-connect-57b4589c47-vzcmj
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.58:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.58:30195
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.50s)
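
Note: the connect flow is deploy, expose as NodePort, resolve the URL through minikube, then fetch it. A sketch (the curl step is illustrative; the test fetches the URL from Go):

    kubectl --context functional-652848 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-652848 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(minikube -p functional-652848 service hello-node-connect --url)
    curl -s "$URL"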

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (49.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [98f9ee50-ad3f-4d71-a684-720dd1e39b78] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007507507s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-652848 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-652848 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-652848 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-652848 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-652848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02e8f805-bf17-4ec5-a4af-20f815ae2a32] Pending
helpers_test.go:344: "sp-pod" [02e8f805-bf17-4ec5-a4af-20f815ae2a32] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02e8f805-bf17-4ec5-a4af-20f815ae2a32] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.004321298s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-652848 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-652848 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-652848 delete -f testdata/storage-provisioner/pod.yaml: (1.922473533s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-652848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eade74f0-1464-41ba-b48d-422d45e1b918] Pending
helpers_test.go:344: "sp-pod" [eade74f0-1464-41ba-b48d-422d45e1b918] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/07/29 23:15:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [eade74f0-1464-41ba-b48d-422d45e1b918] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006607772s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-652848 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.32s)
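
Note: the test proves persistence by writing through one pod, deleting it, and reading through a replacement pod bound to the same PVC. A sketch using the test's own manifests:

    kubectl --context functional-652848 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-652848 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-652848 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-652848 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-652848 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-652848 exec sp-pod -- ls /tmp/mount   # foo survives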

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.24s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh -n functional-652848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cp functional-652848:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd310132131/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh -n functional-652848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh -n functional-652848 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)
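
Note: cp copies in both directions; a path on the node is addressed as <node>:<path>. A sketch of the three cases exercised above:

    minikube -p functional-652848 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-652848 cp functional-652848:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p functional-652848 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt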

TestFunctional/parallel/MySQL (32.12s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-652848 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-mgmbs" [c0aacefa-1fe7-466f-bbbf-59956e0ec0a6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-mgmbs" [c0aacefa-1fe7-466f-bbbf-59956e0ec0a6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.009601535s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;": exit status 1 (292.146952ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;": exit status 1 (326.044849ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;": exit status 1 (447.713254ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.12s)
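
Note: the failed attempts above are expected; mysqld is still initializing after the pod first reports Running. A sketch of the retry the test performs (the pod name is from this run and will differ):

    until kubectl --context functional-652848 exec mysql-64454c8b5c-mgmbs -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done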

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/19411/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /etc/test/nested/copy/19411/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/19411.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /etc/ssl/certs/19411.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/19411.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /usr/share/ca-certificates/19411.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/194112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /etc/ssl/certs/194112.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/194112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /usr/share/ca-certificates/194112.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
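
Note: synced certificates appear inside the guest under both /etc/ssl/certs and /usr/share/ca-certificates, plus a subject-hash-named copy; the test spot-checks each location over ssh. A sketch (the 19411.pem name comes from this run's test fixture):

    minikube -p functional-652848 ssh "sudo cat /etc/ssl/certs/19411.pem"
    minikube -p functional-652848 ssh "sudo cat /usr/share/ca-certificates/19411.pem"
    minikube -p functional-652848 ssh "sudo cat /etc/ssl/certs/51391683.0"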

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-652848 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh "sudo systemctl is-active crio": exit status 1 (225.983764ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)
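
Note: with docker as the active runtime, crio must be inactive; systemctl is-active exits 3 for an inactive unit, which minikube ssh surfaces as a non-zero exit, as seen above. A sketch:

    minikube -p functional-652848 ssh "sudo systemctl is-active crio"
    echo $?   # non-zero; stdout reads "inactive"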

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652848 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-652848
docker.io/kicbase/echo-server:functional-652848
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-652848 image ls --format short --alsologtostderr:
I0729 23:14:57.730826   28560 out.go:291] Setting OutFile to fd 1 ...
I0729 23:14:57.731092   28560 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:14:57.731101   28560 out.go:304] Setting ErrFile to fd 2...
I0729 23:14:57.731105   28560 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:14:57.731254   28560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
I0729 23:14:57.731749   28560 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:14:57.731840   28560 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:14:57.732194   28560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:14:57.732235   28560 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:14:57.747037   28560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
I0729 23:14:57.747548   28560 main.go:141] libmachine: () Calling .GetVersion
I0729 23:14:57.748094   28560 main.go:141] libmachine: Using API Version  1
I0729 23:14:57.748118   28560 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:14:57.748446   28560 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:14:57.748641   28560 main.go:141] libmachine: (functional-652848) Calling .GetState
I0729 23:14:57.750366   28560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:14:57.750410   28560 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:14:57.765317   28560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
I0729 23:14:57.765671   28560 main.go:141] libmachine: () Calling .GetVersion
I0729 23:14:57.766165   28560 main.go:141] libmachine: Using API Version  1
I0729 23:14:57.766189   28560 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:14:57.766486   28560 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:14:57.766648   28560 main.go:141] libmachine: (functional-652848) Calling .DriverName
I0729 23:14:57.766858   28560 ssh_runner.go:195] Run: systemctl --version
I0729 23:14:57.766880   28560 main.go:141] libmachine: (functional-652848) Calling .GetSSHHostname
I0729 23:14:57.769300   28560 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:14:57.769667   28560 main.go:141] libmachine: (functional-652848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:32:82", ip: ""} in network mk-functional-652848: {Iface:virbr1 ExpiryTime:2024-07-30 00:11:53 +0000 UTC Type:0 Mac:52:54:00:6e:32:82 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:functional-652848 Clientid:01:52:54:00:6e:32:82}
I0729 23:14:57.769693   28560 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined IP address 192.168.39.58 and MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:14:57.769834   28560 main.go:141] libmachine: (functional-652848) Calling .GetSSHPort
I0729 23:14:57.769989   28560 main.go:141] libmachine: (functional-652848) Calling .GetSSHKeyPath
I0729 23:14:57.770119   28560 main.go:141] libmachine: (functional-652848) Calling .GetSSHUsername
I0729 23:14:57.770240   28560 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/functional-652848/id_rsa Username:docker}
I0729 23:14:57.859231   28560 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 23:14:57.927640   28560 main.go:141] libmachine: Making call to close driver server
I0729 23:14:57.927652   28560 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:14:57.927930   28560 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:14:57.927949   28560 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 23:14:57.927959   28560 main.go:141] libmachine: Making call to close driver server
I0729 23:14:57.927968   28560 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:14:57.928274   28560 main.go:141] libmachine: (functional-652848) DBG | Closing plugin on server side
I0729 23:14:57.928286   28560 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:14:57.928316   28560 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
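
Note: image ls supports several output formats; short prints one reference per line, and table (used by the next test) adds image IDs and sizes. A sketch:

    minikube -p functional-652848 image ls --format short
    minikube -p functional-652848 image ls --format table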

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652848 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server               | functional-652848 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-652848 | 24efd5b39f38e | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-652848 | 4946add03fdd6 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-652848 image ls --format table --alsologtostderr:
I0729 23:15:02.581179   29094 out.go:291] Setting OutFile to fd 1 ...
I0729 23:15:02.581307   29094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:15:02.581316   29094 out.go:304] Setting ErrFile to fd 2...
I0729 23:15:02.581320   29094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:15:02.581530   29094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
I0729 23:15:02.582091   29094 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:15:02.582200   29094 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:15:02.582584   29094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:15:02.582635   29094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:15:02.600029   29094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
I0729 23:15:02.600575   29094 main.go:141] libmachine: () Calling .GetVersion
I0729 23:15:02.601131   29094 main.go:141] libmachine: Using API Version  1
I0729 23:15:02.601151   29094 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:15:02.601478   29094 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:15:02.601648   29094 main.go:141] libmachine: (functional-652848) Calling .GetState
I0729 23:15:02.603436   29094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:15:02.603477   29094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:15:02.619022   29094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
I0729 23:15:02.619510   29094 main.go:141] libmachine: () Calling .GetVersion
I0729 23:15:02.619999   29094 main.go:141] libmachine: Using API Version  1
I0729 23:15:02.620023   29094 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:15:02.620406   29094 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:15:02.620600   29094 main.go:141] libmachine: (functional-652848) Calling .DriverName
I0729 23:15:02.620796   29094 ssh_runner.go:195] Run: systemctl --version
I0729 23:15:02.620828   29094 main.go:141] libmachine: (functional-652848) Calling .GetSSHHostname
I0729 23:15:02.623313   29094 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:15:02.623800   29094 main.go:141] libmachine: (functional-652848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:32:82", ip: ""} in network mk-functional-652848: {Iface:virbr1 ExpiryTime:2024-07-30 00:11:53 +0000 UTC Type:0 Mac:52:54:00:6e:32:82 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:functional-652848 Clientid:01:52:54:00:6e:32:82}
I0729 23:15:02.623849   29094 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined IP address 192.168.39.58 and MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:15:02.624096   29094 main.go:141] libmachine: (functional-652848) Calling .GetSSHPort
I0729 23:15:02.624277   29094 main.go:141] libmachine: (functional-652848) Calling .GetSSHKeyPath
I0729 23:15:02.624462   29094 main.go:141] libmachine: (functional-652848) Calling .GetSSHUsername
I0729 23:15:02.624607   29094 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/functional-652848/id_rsa Username:docker}
I0729 23:15:02.797080   29094 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 23:15:02.857033   29094 main.go:141] libmachine: Making call to close driver server
I0729 23:15:02.857045   29094 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:15:02.857329   29094 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:15:02.857354   29094 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 23:15:02.857364   29094 main.go:141] libmachine: Making call to close driver server
I0729 23:15:02.857372   29094 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:15:02.857398   29094 main.go:141] libmachine: (functional-652848) DBG | Closing plugin on server side
I0729 23:15:02.857686   29094 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:15:02.857713   29094 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652848 image ls --format json --alsologtostderr:
[{"id":"24efd5b39f38e8593d39105ecd5d2ae64ae4e6bd284ad619aeda651d78484a05","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-652848"],"size":"1240000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"4946add03fdd67fbb31d6ad97d3fe1f88cc19dffc7679be2e7330ad97f872028","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-652848"],"size":"30"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b7
4c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-652848"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.
io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-652848 image ls --format json --alsologtostderr:
I0729 23:15:02.243209   28962 out.go:291] Setting OutFile to fd 1 ...
I0729 23:15:02.243304   28962 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:15:02.243311   28962 out.go:304] Setting ErrFile to fd 2...
I0729 23:15:02.243315   28962 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:15:02.243479   28962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
I0729 23:15:02.244002   28962 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:15:02.244095   28962 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:15:02.244506   28962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:15:02.244540   28962 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:15:02.259494   28962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
I0729 23:15:02.259958   28962 main.go:141] libmachine: () Calling .GetVersion
I0729 23:15:02.260512   28962 main.go:141] libmachine: Using API Version  1
I0729 23:15:02.260541   28962 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:15:02.260906   28962 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:15:02.261086   28962 main.go:141] libmachine: (functional-652848) Calling .GetState
I0729 23:15:02.263012   28962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:15:02.263053   28962 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:15:02.280734   28962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35281
I0729 23:15:02.281112   28962 main.go:141] libmachine: () Calling .GetVersion
I0729 23:15:02.281565   28962 main.go:141] libmachine: Using API Version  1
I0729 23:15:02.281581   28962 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:15:02.281899   28962 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:15:02.282029   28962 main.go:141] libmachine: (functional-652848) Calling .DriverName
I0729 23:15:02.282218   28962 ssh_runner.go:195] Run: systemctl --version
I0729 23:15:02.282241   28962 main.go:141] libmachine: (functional-652848) Calling .GetSSHHostname
I0729 23:15:02.285416   28962 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:15:02.285832   28962 main.go:141] libmachine: (functional-652848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:32:82", ip: ""} in network mk-functional-652848: {Iface:virbr1 ExpiryTime:2024-07-30 00:11:53 +0000 UTC Type:0 Mac:52:54:00:6e:32:82 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:functional-652848 Clientid:01:52:54:00:6e:32:82}
I0729 23:15:02.285856   28962 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined IP address 192.168.39.58 and MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:15:02.286099   28962 main.go:141] libmachine: (functional-652848) Calling .GetSSHPort
I0729 23:15:02.286244   28962 main.go:141] libmachine: (functional-652848) Calling .GetSSHKeyPath
I0729 23:15:02.286355   28962 main.go:141] libmachine: (functional-652848) Calling .GetSSHUsername
I0729 23:15:02.286450   28962 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/functional-652848/id_rsa Username:docker}
I0729 23:15:02.382978   28962 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 23:15:02.524261   28962 main.go:141] libmachine: Making call to close driver server
I0729 23:15:02.524279   28962 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:15:02.524529   28962 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:15:02.524552   28962 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 23:15:02.524561   28962 main.go:141] libmachine: Making call to close driver server
I0729 23:15:02.524570   28962 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:15:02.526284   28962 main.go:141] libmachine: (functional-652848) DBG | Closing plugin on server side
I0729 23:15:02.526336   28962 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:15:02.526359   28962 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
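
Note: the JSON above is a single array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into standard tooling. A minimal sketch, assuming jq is available on the host (jq is not part of this test run):

  # List every tag the cluster's runtime knows about, one per line.
  out/minikube-linux-amd64 -p functional-652848 image ls --format json \
    | jq -r '.[].repoTags[]'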

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652848 image ls --format yaml --alsologtostderr:
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
  repoDigests: []
  repoTags:
  - registry.k8s.io/kube-controller-manager:v1.30.3
  size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
  repoDigests: []
  repoTags:
  - registry.k8s.io/etcd:3.5.12-0
  size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
  repoDigests: []
  repoTags:
  - registry.k8s.io/coredns/coredns:v1.11.1
  size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:3.9
  size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:3.3
  size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
  repoDigests: []
  repoTags:
  - gcr.io/k8s-minikube/busybox:1.28.4-glibc
  size: "4400000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
  repoDigests: []
  repoTags:
  - registry.k8s.io/kube-proxy:v1.30.3
  size: "84700000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
  repoDigests: []
  repoTags:
  - docker.io/library/nginx:latest
  size: "188000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
  repoDigests: []
  repoTags:
  - gcr.io/k8s-minikube/storage-provisioner:v5
  size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:3.1
  size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:latest
  size: "240000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
  repoDigests: []
  repoTags:
  - registry.k8s.io/kube-apiserver:v1.30.3
  size: "117000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
  repoDigests: []
  repoTags:
  - registry.k8s.io/echoserver:1.8
  size: "95400000"
- id: 4946add03fdd67fbb31d6ad97d3fe1f88cc19dffc7679be2e7330ad97f872028
  repoDigests: []
  repoTags:
  - docker.io/library/minikube-local-cache-test:functional-652848
  size: "30"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
  repoDigests: []
  repoTags:
  - registry.k8s.io/kube-scheduler:v1.30.3
  size: "62000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
  repoDigests: []
  repoTags:
  - docker.io/library/mysql:5.7
  size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
  repoDigests: []
  repoTags:
  - docker.io/kicbase/echo-server:functional-652848
  size: "4940000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-652848 image ls --format yaml --alsologtostderr:
I0729 23:14:57.979262   28584 out.go:291] Setting OutFile to fd 1 ...
I0729 23:14:57.979547   28584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:14:57.979562   28584 out.go:304] Setting ErrFile to fd 2...
I0729 23:14:57.979568   28584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:14:57.979835   28584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
I0729 23:14:57.980600   28584 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:14:57.980755   28584 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:14:57.981307   28584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:14:57.981361   28584 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:14:57.997228   28584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
I0729 23:14:57.997736   28584 main.go:141] libmachine: () Calling .GetVersion
I0729 23:14:57.998344   28584 main.go:141] libmachine: Using API Version  1
I0729 23:14:57.998365   28584 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:14:57.998718   28584 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:14:57.998911   28584 main.go:141] libmachine: (functional-652848) Calling .GetState
I0729 23:14:58.000861   28584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:14:58.000906   28584 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:14:58.016450   28584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
I0729 23:14:58.016823   28584 main.go:141] libmachine: () Calling .GetVersion
I0729 23:14:58.017327   28584 main.go:141] libmachine: Using API Version  1
I0729 23:14:58.017352   28584 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:14:58.017717   28584 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:14:58.017942   28584 main.go:141] libmachine: (functional-652848) Calling .DriverName
I0729 23:14:58.018137   28584 ssh_runner.go:195] Run: systemctl --version
I0729 23:14:58.018158   28584 main.go:141] libmachine: (functional-652848) Calling .GetSSHHostname
I0729 23:14:58.021004   28584 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:14:58.021472   28584 main.go:141] libmachine: (functional-652848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:32:82", ip: ""} in network mk-functional-652848: {Iface:virbr1 ExpiryTime:2024-07-30 00:11:53 +0000 UTC Type:0 Mac:52:54:00:6e:32:82 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:functional-652848 Clientid:01:52:54:00:6e:32:82}
I0729 23:14:58.021508   28584 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined IP address 192.168.39.58 and MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:14:58.021656   28584 main.go:141] libmachine: (functional-652848) Calling .GetSSHPort
I0729 23:14:58.021822   28584 main.go:141] libmachine: (functional-652848) Calling .GetSSHKeyPath
I0729 23:14:58.021969   28584 main.go:141] libmachine: (functional-652848) Calling .GetSSHUsername
I0729 23:14:58.022110   28584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/functional-652848/id_rsa Username:docker}
I0729 23:14:58.113233   28584 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 23:14:58.144837   28584 main.go:141] libmachine: Making call to close driver server
I0729 23:14:58.144856   28584 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:14:58.145128   28584 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:14:58.145155   28584 main.go:141] libmachine: (functional-652848) DBG | Closing plugin on server side
I0729 23:14:58.145164   28584 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 23:14:58.145215   28584 main.go:141] libmachine: Making call to close driver server
I0729 23:14:58.145223   28584 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:14:58.145413   28584 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:14:58.145486   28584 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 23:14:58.145434   28584 main.go:141] libmachine: (functional-652848) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh pgrep buildkitd: exit status 1 (183.642791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image build -t localhost/my-image:functional-652848 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-652848 image build -t localhost/my-image:functional-652848 testdata/build --alsologtostderr: (3.633497369s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-652848 image build -t localhost/my-image:functional-652848 testdata/build --alsologtostderr:
I0729 23:14:58.373933   28637 out.go:291] Setting OutFile to fd 1 ...
I0729 23:14:58.374198   28637 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:14:58.374208   28637 out.go:304] Setting ErrFile to fd 2...
I0729 23:14:58.374212   28637 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 23:14:58.374370   28637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
I0729 23:14:58.374922   28637 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:14:58.375471   28637 config.go:182] Loaded profile config "functional-652848": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 23:14:58.375822   28637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:14:58.375857   28637 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:14:58.391525   28637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39127
I0729 23:14:58.391924   28637 main.go:141] libmachine: () Calling .GetVersion
I0729 23:14:58.392465   28637 main.go:141] libmachine: Using API Version  1
I0729 23:14:58.392489   28637 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:14:58.392845   28637 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:14:58.393033   28637 main.go:141] libmachine: (functional-652848) Calling .GetState
I0729 23:14:58.394568   28637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 23:14:58.394603   28637 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 23:14:58.408666   28637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
I0729 23:14:58.409025   28637 main.go:141] libmachine: () Calling .GetVersion
I0729 23:14:58.409455   28637 main.go:141] libmachine: Using API Version  1
I0729 23:14:58.409475   28637 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 23:14:58.409726   28637 main.go:141] libmachine: () Calling .GetMachineName
I0729 23:14:58.409895   28637 main.go:141] libmachine: (functional-652848) Calling .DriverName
I0729 23:14:58.410066   28637 ssh_runner.go:195] Run: systemctl --version
I0729 23:14:58.410094   28637 main.go:141] libmachine: (functional-652848) Calling .GetSSHHostname
I0729 23:14:58.412621   28637 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:14:58.412986   28637 main.go:141] libmachine: (functional-652848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:32:82", ip: ""} in network mk-functional-652848: {Iface:virbr1 ExpiryTime:2024-07-30 00:11:53 +0000 UTC Type:0 Mac:52:54:00:6e:32:82 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:functional-652848 Clientid:01:52:54:00:6e:32:82}
I0729 23:14:58.413011   28637 main.go:141] libmachine: (functional-652848) DBG | domain functional-652848 has defined IP address 192.168.39.58 and MAC address 52:54:00:6e:32:82 in network mk-functional-652848
I0729 23:14:58.413128   28637 main.go:141] libmachine: (functional-652848) Calling .GetSSHPort
I0729 23:14:58.413298   28637 main.go:141] libmachine: (functional-652848) Calling .GetSSHKeyPath
I0729 23:14:58.413451   28637 main.go:141] libmachine: (functional-652848) Calling .GetSSHUsername
I0729 23:14:58.413576   28637 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/functional-652848/id_rsa Username:docker}
I0729 23:14:58.494431   28637 build_images.go:161] Building image from path: /tmp/build.2939735562.tar
I0729 23:14:58.494504   28637 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 23:14:58.506048   28637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2939735562.tar
I0729 23:14:58.510823   28637 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2939735562.tar: stat -c "%s %y" /var/lib/minikube/build/build.2939735562.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2939735562.tar': No such file or directory
I0729 23:14:58.510854   28637 ssh_runner.go:362] scp /tmp/build.2939735562.tar --> /var/lib/minikube/build/build.2939735562.tar (3072 bytes)
I0729 23:14:58.540996   28637 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2939735562
I0729 23:14:58.551275   28637 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2939735562 -xf /var/lib/minikube/build/build.2939735562.tar
I0729 23:14:58.561383   28637 docker.go:360] Building image: /var/lib/minikube/build/build.2939735562
I0729 23:14:58.561453   28637 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-652848 /var/lib/minikube/build/build.2939735562
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.8s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:24efd5b39f38e8593d39105ecd5d2ae64ae4e6bd284ad619aeda651d78484a05 done
#8 naming to localhost/my-image:functional-652848 done
#8 DONE 0.1s
I0729 23:15:01.935459   28637 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-652848 /var/lib/minikube/build/build.2939735562: (3.373979122s)
I0729 23:15:01.935556   28637 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2939735562
I0729 23:15:01.948757   28637 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2939735562.tar
I0729 23:15:01.962405   28637 build_images.go:217] Built localhost/my-image:functional-652848 from /tmp/build.2939735562.tar
I0729 23:15:01.962435   28637 build_images.go:133] succeeded building to: functional-652848
I0729 23:15:01.962442   28637 build_images.go:134] failed building to: 
I0729 23:15:01.962466   28637 main.go:141] libmachine: Making call to close driver server
I0729 23:15:01.962479   28637 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:15:01.962780   28637 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:15:01.962796   28637 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 23:15:01.962805   28637 main.go:141] libmachine: Making call to close driver server
I0729 23:15:01.962812   28637 main.go:141] libmachine: (functional-652848) Calling .Close
I0729 23:15:01.963075   28637 main.go:141] libmachine: Successfully made call to close driver server
I0729 23:15:01.963093   28637 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
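
Note: the build log above pins down the shape of the test's build context: step #1 loads a 97-byte Dockerfile, step #4 transfers a 62-byte context, and steps [1/3]-[3/3] are FROM busybox, RUN true, ADD content.txt. The following is a reconstruction from those log lines, not the verbatim contents of testdata/build:

  # Hypothetical recreation of the build context exercised above.
  mkdir build && cd build
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  echo hello > content.txt
  out/minikube-linux-amd64 -p functional-652848 image build -t localhost/my-image:functional-652848 .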

TestFunctional/parallel/ImageCommands/Setup (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.834991677s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-652848
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)

TestFunctional/parallel/DockerEnv/bash (0.84s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-652848 docker-env) && out/minikube-linux-amd64 status -p functional-652848"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-652848 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.84s)
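
Note: the eval line above is the whole mechanism under test: `minikube docker-env` prints shell exports (DOCKER_HOST and related variables) that point the host's docker CLI at the daemon inside the VM. A minimal sketch of manual use:

  # After the eval, `docker images` lists the VM's images, not the host's.
  eval $(out/minikube-linux-amd64 -p functional-652848 docker-env)
  docker images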

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image load --daemon docker.io/kicbase/echo-server:functional-652848 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image load --daemon docker.io/kicbase/echo-server:functional-652848 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-652848
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image load --daemon docker.io/kicbase/echo-server:functional-652848 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image save docker.io/kicbase/echo-server:functional-652848 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image rm docker.io/kicbase/echo-server:functional-652848 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.94s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-652848
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 image save --daemon docker.io/kicbase/echo-server:functional-652848 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-652848
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
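
Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full export/remove/re-import round trip. A condensed sketch using the same subcommands (the tarball path here is arbitrary, not the workspace path the test used):

  out/minikube-linux-amd64 -p functional-652848 image save docker.io/kicbase/echo-server:functional-652848 /tmp/echo-server.tar
  out/minikube-linux-amd64 -p functional-652848 image rm docker.io/kicbase/echo-server:functional-652848
  out/minikube-linux-amd64 -p functional-652848 image load /tmp/echo-server.tar
  out/minikube-linux-amd64 -p functional-652848 image ls   # the tag should be listed again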

TestFunctional/parallel/ServiceCmd/DeployApp (21.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-652848 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-652848 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-7hh2s" [527e1e77-5aba-4d0a-b268-a2af894f57a8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-7hh2s" [527e1e77-5aba-4d0a-b268-a2af894f57a8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.005305175s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.21s)
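
Note: this deployment is the endpoint that the later ServiceCmd checks resolve. The equivalent manual steps, as run above; the final wait command is one hedged way to reproduce the test's readiness poll, not what the harness itself runs:

  kubectl --context functional-652848 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-652848 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-652848 wait --for=condition=ready pod -l app=hello-node --timeout=10m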

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "288.075434ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "45.06658ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 service list -o json
functional_test.go:1490: Took "463.614625ms" to run "out/minikube-linux-amd64 -p functional-652848 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "229.246216ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "53.214389ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/MountCmd/any-port (7.38s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdany-port2010493156/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722294893395529981" to /tmp/TestFunctionalparallelMountCmdany-port2010493156/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722294893395529981" to /tmp/TestFunctionalparallelMountCmdany-port2010493156/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722294893395529981" to /tmp/TestFunctionalparallelMountCmdany-port2010493156/001/test-1722294893395529981
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.799319ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 23:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 23:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 23:14 test-1722294893395529981
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh cat /mount-9p/test-1722294893395529981
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-652848 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [133f80e6-83f1-4194-8c65-387132745a56] Pending
helpers_test.go:344: "busybox-mount" [133f80e6-83f1-4194-8c65-387132745a56] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [133f80e6-83f1-4194-8c65-387132745a56] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [133f80e6-83f1-4194-8c65-387132745a56] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004773368s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-652848 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdany-port2010493156/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.38s)
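
Note: the any-port test drives a host-to-guest 9p mount end to end. A minimal sketch of the same flow, with an arbitrary host directory in place of the test's temp dir:

  # Background the mount, verify it from inside the VM, then tear it down.
  out/minikube-linux-amd64 mount -p functional-652848 /tmp/hostdir:/mount-9p &
  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-652848 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-652848 ssh "sudo umount -f /mount-9p"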

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.58:30503
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.58:30503
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
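
Note: the HTTPS, Format, and URL checks above all resolve the same NodePort endpoint (192.168.39.58:30503), just in different shapes:

  out/minikube-linux-amd64 -p functional-652848 service hello-node --url                     # http://192.168.39.58:30503
  out/minikube-linux-amd64 -p functional-652848 service hello-node --url --format={{.IP}}    # 192.168.39.58
  out/minikube-linux-amd64 -p functional-652848 service --namespace=default --https --url hello-node   # https://192.168.39.58:30503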

TestFunctional/parallel/MountCmd/specific-port (1.5s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdspecific-port485289783/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (191.120699ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdspecific-port485289783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh "sudo umount -f /mount-9p": exit status 1 (219.308806ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-652848 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdspecific-port485289783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.50s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1615467909/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1615467909/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1615467909/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T" /mount1: exit status 1 (253.165778ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-652848 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-652848 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1615467909/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1615467909/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1615467909/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)
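
Note: this test's cleanup path is a single kill switch rather than per-daemon teardown. A sketch of the same flow (the host directory is hypothetical; all flags appear in the log above):

	# start several mount daemons for one profile in the background
	out/minikube-linux-amd64 mount -p functional-652848 /tmp/hostdir:/mount1 &
	out/minikube-linux-amd64 mount -p functional-652848 /tmp/hostdir:/mount2 &
	# one --kill=true invocation terminates every mount process for the profile
	out/minikube-linux-amd64 mount -p functional-652848 --kill=true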

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-652848
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-652848
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-652848
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (286.22s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-193483 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-193483 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m1.357907161s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-193483 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0729 23:57:50.599948   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-193483 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.345072927s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-193483 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-193483 addons enable gvisor: (4.596634472s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [31baedef-267f-4e12-a209-0d6a5967a3ac] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00684072s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-193483 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [e7dda15f-df28-4470-9f9f-74ec400dfe90] Pending
helpers_test.go:344: "nginx-gvisor" [e7dda15f-df28-4470-9f9f-74ec400dfe90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [e7dda15f-df28-4470-9f9f-74ec400dfe90] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 54.006795817s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-193483
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-193483: (7.324502719s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-193483 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0729 23:59:24.228886   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-193483 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (56.192614539s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [31baedef-267f-4e12-a209-0d6a5967a3ac] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.006881048s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [e7dda15f-df28-4470-9f9f-74ec400dfe90] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008708104s
helpers_test.go:175: Cleaning up "gvisor-193483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-193483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-193483: (1.061783033s)
--- PASS: TestGvisorAddon (286.22s)
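
Note: condensed, the gVisor flow above is: start a containerd cluster, side-load the addon image, enable the addon, then run a pod pinned to the gvisor runtime and confirm it comes back after a stop/start cycle. A sketch using only commands from the log:

	out/minikube-linux-amd64 start -p gvisor-193483 --container-runtime=containerd --driver=kvm2
	out/minikube-linux-amd64 -p gvisor-193483 cache add gcr.io/k8s-minikube/gvisor-addon:2
	out/minikube-linux-amd64 -p gvisor-193483 addons enable gvisor
	kubectl --context gvisor-193483 replace --force -f testdata/nginx-gvisor.yaml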

TestMultiControlPlane/serial/StartCluster (240.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-238496 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0729 23:15:34.444075   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:17:50.600634   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:18:18.284878   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-238496 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m59.693857282s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (240.35s)
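
Note: the --ha flag is what makes this a multi-control-plane cluster (three control-plane nodes behind the shared virtual IP 192.168.39.254 seen in the status logs below). A sketch of the start plus the health check the test performs:

	# bring up an HA cluster and wait for all components
	out/minikube-linux-amd64 start -p ha-238496 --ha --wait=true --memory=2200 --driver=kvm2
	# every node should report Running/Configured
	out/minikube-linux-amd64 -p ha-238496 status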

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-ftt4w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-ftt4w -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-scl6h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-scl6h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
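
Note: for each busybox pod the test resolves host.minikube.internal through cluster DNS and pings the resulting host address (192.168.39.1 on this KVM network). The per-pod check, shown for one pod name taken from the log:

	out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p ha-238496 -- exec busybox-fc5497c4f-d42qb -- sh -c "ping -c 1 192.168.39.1"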

TestMultiControlPlane/serial/AddWorkerNode (64.19s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-238496 -v=7 --alsologtostderr
E0729 23:20:05.189627   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:20:46.150082   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-238496 -v=7 --alsologtostderr: (1m3.332460189s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (64.19s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-238496 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMultiControlPlane/serial/CopyFile (12.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp testdata/cp-test.txt ha-238496:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2298622597/001/cp-test_ha-238496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496:/home/docker/cp-test.txt ha-238496-m02:/home/docker/cp-test_ha-238496_ha-238496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test_ha-238496_ha-238496-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496:/home/docker/cp-test.txt ha-238496-m03:/home/docker/cp-test_ha-238496_ha-238496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test_ha-238496_ha-238496-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496:/home/docker/cp-test.txt ha-238496-m04:/home/docker/cp-test_ha-238496_ha-238496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test_ha-238496_ha-238496-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp testdata/cp-test.txt ha-238496-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2298622597/001/cp-test_ha-238496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m02:/home/docker/cp-test.txt ha-238496:/home/docker/cp-test_ha-238496-m02_ha-238496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test_ha-238496-m02_ha-238496.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m02:/home/docker/cp-test.txt ha-238496-m03:/home/docker/cp-test_ha-238496-m02_ha-238496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test_ha-238496-m02_ha-238496-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m02:/home/docker/cp-test.txt ha-238496-m04:/home/docker/cp-test_ha-238496-m02_ha-238496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test_ha-238496-m02_ha-238496-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp testdata/cp-test.txt ha-238496-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2298622597/001/cp-test_ha-238496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m03:/home/docker/cp-test.txt ha-238496:/home/docker/cp-test_ha-238496-m03_ha-238496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test_ha-238496-m03_ha-238496.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m03:/home/docker/cp-test.txt ha-238496-m02:/home/docker/cp-test_ha-238496-m03_ha-238496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test_ha-238496-m03_ha-238496-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m03:/home/docker/cp-test.txt ha-238496-m04:/home/docker/cp-test_ha-238496-m03_ha-238496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test_ha-238496-m03_ha-238496-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp testdata/cp-test.txt ha-238496-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2298622597/001/cp-test_ha-238496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m04:/home/docker/cp-test.txt ha-238496:/home/docker/cp-test_ha-238496-m04_ha-238496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496 "sudo cat /home/docker/cp-test_ha-238496-m04_ha-238496.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m04:/home/docker/cp-test.txt ha-238496-m02:/home/docker/cp-test_ha-238496-m04_ha-238496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test_ha-238496-m04_ha-238496-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m04:/home/docker/cp-test.txt ha-238496-m03:/home/docker/cp-test_ha-238496-m04_ha-238496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m03 "sudo cat /home/docker/cp-test_ha-238496-m04_ha-238496-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.76s)
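
Note: the matrix above exercises every direction of minikube cp (host to node, node to host, node to node), verifying each transfer with cat over ssh. One representative round trip (the destination file name in the last step is hypothetical):

	# host -> node, then read the file back to verify
	out/minikube-linux-amd64 -p ha-238496 cp testdata/cp-test.txt ha-238496-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-238496 ssh -n ha-238496-m02 "sudo cat /home/docker/cp-test.txt"
	# node -> node
	out/minikube-linux-amd64 -p ha-238496 cp ha-238496-m02:/home/docker/cp-test.txt ha-238496-m03:/home/docker/copy-check.txt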

TestMultiControlPlane/serial/StopSecondaryNode (13.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-238496 node stop m02 -v=7 --alsologtostderr: (13.302951748s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr: exit status 7 (644.264447ms)

-- stdout --
	ha-238496
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-238496-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-238496-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-238496-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr **
	I0729 23:21:27.650865   34543 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:21:27.651137   34543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:21:27.651147   34543 out.go:304] Setting ErrFile to fd 2...
	I0729 23:21:27.651152   34543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:21:27.651327   34543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:21:27.651514   34543 out.go:298] Setting JSON to false
	I0729 23:21:27.651541   34543 mustload.go:65] Loading cluster: ha-238496
	I0729 23:21:27.651655   34543 notify.go:220] Checking for updates...
	I0729 23:21:27.652036   34543 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:21:27.652056   34543 status.go:255] checking status of ha-238496 ...
	I0729 23:21:27.652533   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.652597   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.671728   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I0729 23:21:27.672144   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.672838   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.672866   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.673211   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.673399   34543 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:21:27.675090   34543 status.go:330] ha-238496 host status = "Running" (err=<nil>)
	I0729 23:21:27.675108   34543 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:21:27.675531   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.675572   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.690980   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0729 23:21:27.691516   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.692043   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.692068   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.692409   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.692580   34543 main.go:141] libmachine: (ha-238496) Calling .GetIP
	I0729 23:21:27.695877   34543 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:21:27.696306   34543 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:21:27.696339   34543 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:21:27.696531   34543 host.go:66] Checking if "ha-238496" exists ...
	I0729 23:21:27.696851   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.696896   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.712064   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0729 23:21:27.712475   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.713047   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.713071   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.713391   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.713558   34543 main.go:141] libmachine: (ha-238496) Calling .DriverName
	I0729 23:21:27.713828   34543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 23:21:27.713857   34543 main.go:141] libmachine: (ha-238496) Calling .GetSSHHostname
	I0729 23:21:27.716850   34543 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:21:27.717310   34543 main.go:141] libmachine: (ha-238496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:48:55", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:15:30 +0000 UTC Type:0 Mac:52:54:00:4c:48:55 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-238496 Clientid:01:52:54:00:4c:48:55}
	I0729 23:21:27.717331   34543 main.go:141] libmachine: (ha-238496) DBG | domain ha-238496 has defined IP address 192.168.39.113 and MAC address 52:54:00:4c:48:55 in network mk-ha-238496
	I0729 23:21:27.717610   34543 main.go:141] libmachine: (ha-238496) Calling .GetSSHPort
	I0729 23:21:27.717797   34543 main.go:141] libmachine: (ha-238496) Calling .GetSSHKeyPath
	I0729 23:21:27.717948   34543 main.go:141] libmachine: (ha-238496) Calling .GetSSHUsername
	I0729 23:21:27.718110   34543 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496/id_rsa Username:docker}
	I0729 23:21:27.824398   34543 ssh_runner.go:195] Run: systemctl --version
	I0729 23:21:27.831706   34543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:21:27.846580   34543 kubeconfig.go:125] found "ha-238496" server: "https://192.168.39.254:8443"
	I0729 23:21:27.846606   34543 api_server.go:166] Checking apiserver status ...
	I0729 23:21:27.846645   34543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 23:21:27.860877   34543 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1935/cgroup
	W0729 23:21:27.870733   34543 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1935/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 23:21:27.870784   34543 ssh_runner.go:195] Run: ls
	I0729 23:21:27.875312   34543 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 23:21:27.879422   34543 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 23:21:27.879446   34543 status.go:422] ha-238496 apiserver status = Running (err=<nil>)
	I0729 23:21:27.879470   34543 status.go:257] ha-238496 status: &{Name:ha-238496 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:21:27.879492   34543 status.go:255] checking status of ha-238496-m02 ...
	I0729 23:21:27.879786   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.879825   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.898176   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0729 23:21:27.898672   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.899136   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.899160   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.899417   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.899587   34543 main.go:141] libmachine: (ha-238496-m02) Calling .GetState
	I0729 23:21:27.901124   34543 status.go:330] ha-238496-m02 host status = "Stopped" (err=<nil>)
	I0729 23:21:27.901136   34543 status.go:343] host is not running, skipping remaining checks
	I0729 23:21:27.901142   34543 status.go:257] ha-238496-m02 status: &{Name:ha-238496-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:21:27.901156   34543 status.go:255] checking status of ha-238496-m03 ...
	I0729 23:21:27.901461   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.901494   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.916416   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0729 23:21:27.916847   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.917360   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.917385   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.917678   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.917840   34543 main.go:141] libmachine: (ha-238496-m03) Calling .GetState
	I0729 23:21:27.919499   34543 status.go:330] ha-238496-m03 host status = "Running" (err=<nil>)
	I0729 23:21:27.919515   34543 host.go:66] Checking if "ha-238496-m03" exists ...
	I0729 23:21:27.919849   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.919881   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.934215   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I0729 23:21:27.934611   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.935191   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.935218   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.935560   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.935758   34543 main.go:141] libmachine: (ha-238496-m03) Calling .GetIP
	I0729 23:21:27.938587   34543 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:21:27.939066   34543 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:21:27.939093   34543 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:21:27.939217   34543 host.go:66] Checking if "ha-238496-m03" exists ...
	I0729 23:21:27.939545   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:27.939591   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:27.954871   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I0729 23:21:27.955332   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:27.955804   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:27.955824   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:27.956117   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:27.956286   34543 main.go:141] libmachine: (ha-238496-m03) Calling .DriverName
	I0729 23:21:27.956450   34543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 23:21:27.956472   34543 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHHostname
	I0729 23:21:27.959168   34543 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:21:27.959584   34543 main.go:141] libmachine: (ha-238496-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:73:00", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:18:11 +0000 UTC Type:0 Mac:52:54:00:34:73:00 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-238496-m03 Clientid:01:52:54:00:34:73:00}
	I0729 23:21:27.959614   34543 main.go:141] libmachine: (ha-238496-m03) DBG | domain ha-238496-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:34:73:00 in network mk-ha-238496
	I0729 23:21:27.959767   34543 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHPort
	I0729 23:21:27.959919   34543 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHKeyPath
	I0729 23:21:27.960072   34543 main.go:141] libmachine: (ha-238496-m03) Calling .GetSSHUsername
	I0729 23:21:27.960203   34543 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m03/id_rsa Username:docker}
	I0729 23:21:28.039218   34543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:21:28.056998   34543 kubeconfig.go:125] found "ha-238496" server: "https://192.168.39.254:8443"
	I0729 23:21:28.057030   34543 api_server.go:166] Checking apiserver status ...
	I0729 23:21:28.057067   34543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 23:21:28.072392   34543 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup
	W0729 23:21:28.082730   34543 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 23:21:28.082781   34543 ssh_runner.go:195] Run: ls
	I0729 23:21:28.087800   34543 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 23:21:28.092458   34543 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 23:21:28.092479   34543 status.go:422] ha-238496-m03 apiserver status = Running (err=<nil>)
	I0729 23:21:28.092491   34543 status.go:257] ha-238496-m03 status: &{Name:ha-238496-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:21:28.092511   34543 status.go:255] checking status of ha-238496-m04 ...
	I0729 23:21:28.092902   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:28.092943   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:28.110989   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36131
	I0729 23:21:28.111396   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:28.111883   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:28.111902   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:28.112200   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:28.112374   34543 main.go:141] libmachine: (ha-238496-m04) Calling .GetState
	I0729 23:21:28.114038   34543 status.go:330] ha-238496-m04 host status = "Running" (err=<nil>)
	I0729 23:21:28.114055   34543 host.go:66] Checking if "ha-238496-m04" exists ...
	I0729 23:21:28.114349   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:28.114414   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:28.130574   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 23:21:28.131039   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:28.131559   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:28.131598   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:28.131952   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:28.132151   34543 main.go:141] libmachine: (ha-238496-m04) Calling .GetIP
	I0729 23:21:28.135073   34543 main.go:141] libmachine: (ha-238496-m04) DBG | domain ha-238496-m04 has defined MAC address 52:54:00:e0:89:bc in network mk-ha-238496
	I0729 23:21:28.135467   34543 main.go:141] libmachine: (ha-238496-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:89:bc", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:20:12 +0000 UTC Type:0 Mac:52:54:00:e0:89:bc Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-238496-m04 Clientid:01:52:54:00:e0:89:bc}
	I0729 23:21:28.135509   34543 main.go:141] libmachine: (ha-238496-m04) DBG | domain ha-238496-m04 has defined IP address 192.168.39.59 and MAC address 52:54:00:e0:89:bc in network mk-ha-238496
	I0729 23:21:28.135645   34543 host.go:66] Checking if "ha-238496-m04" exists ...
	I0729 23:21:28.135938   34543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:21:28.135978   34543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:21:28.150508   34543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I0729 23:21:28.150920   34543 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:21:28.151406   34543 main.go:141] libmachine: Using API Version  1
	I0729 23:21:28.151426   34543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:21:28.151725   34543 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:21:28.151921   34543 main.go:141] libmachine: (ha-238496-m04) Calling .DriverName
	I0729 23:21:28.152104   34543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 23:21:28.152124   34543 main.go:141] libmachine: (ha-238496-m04) Calling .GetSSHHostname
	I0729 23:21:28.154723   34543 main.go:141] libmachine: (ha-238496-m04) DBG | domain ha-238496-m04 has defined MAC address 52:54:00:e0:89:bc in network mk-ha-238496
	I0729 23:21:28.155102   34543 main.go:141] libmachine: (ha-238496-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:89:bc", ip: ""} in network mk-ha-238496: {Iface:virbr1 ExpiryTime:2024-07-30 00:20:12 +0000 UTC Type:0 Mac:52:54:00:e0:89:bc Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-238496-m04 Clientid:01:52:54:00:e0:89:bc}
	I0729 23:21:28.155125   34543 main.go:141] libmachine: (ha-238496-m04) DBG | domain ha-238496-m04 has defined IP address 192.168.39.59 and MAC address 52:54:00:e0:89:bc in network mk-ha-238496
	I0729 23:21:28.155327   34543 main.go:141] libmachine: (ha-238496-m04) Calling .GetSSHPort
	I0729 23:21:28.155504   34543 main.go:141] libmachine: (ha-238496-m04) Calling .GetSSHKeyPath
	I0729 23:21:28.155656   34543 main.go:141] libmachine: (ha-238496-m04) Calling .GetSSHUsername
	I0729 23:21:28.155831   34543 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/ha-238496-m04/id_rsa Username:docker}
	I0729 23:21:28.238125   34543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:21:28.253818   34543 status.go:257] ha-238496-m04 status: &{Name:ha-238496-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.95s)
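
Note: with one control-plane node down, status exits 7 (as in the log above) while still printing per-node detail, so the test asserts on the output rather than treating the exit code as a failure. A sketch of the same sequence:

	out/minikube-linux-amd64 -p ha-238496 node stop m02
	# a non-zero exit here signals stopped hosts, not a broken command
	out/minikube-linux-amd64 -p ha-238496 status || echo "status exit code: $? (7 expected with a stopped node)"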

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

TestMultiControlPlane/serial/RestartSecondaryNode (48.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 node start m02 -v=7 --alsologtostderr
E0729 23:22:08.071349   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-238496 node start m02 -v=7 --alsologtostderr: (47.312066199s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.55s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.92s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-238496 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-238496 -v=7 --alsologtostderr
E0729 23:22:50.600740   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-238496 -v=7 --alsologtostderr: (42.379936024s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-238496 --wait=true -v=7 --alsologtostderr
E0729 23:24:24.229125   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0729 23:24:51.912279   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-238496 --wait=true -v=7 --alsologtostderr: (3m25.452938793s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-238496
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.92s)
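
Note: the "keeps nodes" property is checked by comparing the node list before a full stop and after a --wait=true restart. Sketch:

	out/minikube-linux-amd64 node list -p ha-238496     # record the four nodes
	out/minikube-linux-amd64 stop -p ha-238496
	out/minikube-linux-amd64 start -p ha-238496 --wait=true
	out/minikube-linux-amd64 node list -p ha-238496     # expect the same four nodes back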

TestMultiControlPlane/serial/DeleteSecondaryNode (8.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-238496 node delete m03 -v=7 --alsologtostderr: (7.505834489s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.24s)
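
Note: node deletion is verified from both sides, minikube's own status and the apiserver's node list. Sketch:

	out/minikube-linux-amd64 -p ha-238496 node delete m03
	out/minikube-linux-amd64 -p ha-238496 status
	kubectl get nodes     # ha-238496-m03 should no longer appear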

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

TestMultiControlPlane/serial/StopCluster (38.46s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-238496 stop -v=7 --alsologtostderr: (38.362316479s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr: exit status 7 (98.127191ms)

-- stdout --
	ha-238496
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-238496-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-238496-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr **
	I0729 23:27:12.351547   36962 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:27:12.351824   36962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:27:12.351834   36962 out.go:304] Setting ErrFile to fd 2...
	I0729 23:27:12.351838   36962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:27:12.352020   36962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:27:12.352189   36962 out.go:298] Setting JSON to false
	I0729 23:27:12.352214   36962 mustload.go:65] Loading cluster: ha-238496
	I0729 23:27:12.352339   36962 notify.go:220] Checking for updates...
	I0729 23:27:12.352735   36962 config.go:182] Loaded profile config "ha-238496": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:27:12.352755   36962 status.go:255] checking status of ha-238496 ...
	I0729 23:27:12.353201   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:27:12.353264   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:27:12.368668   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0729 23:27:12.369059   36962 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:27:12.369618   36962 main.go:141] libmachine: Using API Version  1
	I0729 23:27:12.369652   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:27:12.369931   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:27:12.370138   36962 main.go:141] libmachine: (ha-238496) Calling .GetState
	I0729 23:27:12.371778   36962 status.go:330] ha-238496 host status = "Stopped" (err=<nil>)
	I0729 23:27:12.371795   36962 status.go:343] host is not running, skipping remaining checks
	I0729 23:27:12.371803   36962 status.go:257] ha-238496 status: &{Name:ha-238496 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:27:12.371836   36962 status.go:255] checking status of ha-238496-m02 ...
	I0729 23:27:12.372219   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:27:12.372259   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:27:12.386868   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0729 23:27:12.387296   36962 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:27:12.387767   36962 main.go:141] libmachine: Using API Version  1
	I0729 23:27:12.387788   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:27:12.388067   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:27:12.388249   36962 main.go:141] libmachine: (ha-238496-m02) Calling .GetState
	I0729 23:27:12.389753   36962 status.go:330] ha-238496-m02 host status = "Stopped" (err=<nil>)
	I0729 23:27:12.389767   36962 status.go:343] host is not running, skipping remaining checks
	I0729 23:27:12.389775   36962 status.go:257] ha-238496-m02 status: &{Name:ha-238496-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:27:12.389798   36962 status.go:255] checking status of ha-238496-m04 ...
	I0729 23:27:12.390077   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:27:12.390111   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:27:12.404709   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I0729 23:27:12.405139   36962 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:27:12.405576   36962 main.go:141] libmachine: Using API Version  1
	I0729 23:27:12.405597   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:27:12.405879   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:27:12.406056   36962 main.go:141] libmachine: (ha-238496-m04) Calling .GetState
	I0729 23:27:12.407660   36962 status.go:330] ha-238496-m04 host status = "Stopped" (err=<nil>)
	I0729 23:27:12.407674   36962 status.go:343] host is not running, skipping remaining checks
	I0729 23:27:12.407681   36962 status.go:257] ha-238496-m04 status: &{Name:ha-238496-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.46s)

TestMultiControlPlane/serial/RestartCluster (161.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-238496 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0729 23:27:50.600235   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:29:13.645231   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:29:24.228411   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-238496 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m40.631343091s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (161.37s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (88.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-238496 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-238496 --control-plane -v=7 --alsologtostderr: (1m27.523644028s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-238496 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (88.36s)
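
Note: node add --control-plane joins a new control-plane member to the running HA cluster (here effectively replacing the m03 deleted earlier). Sketch:

	out/minikube-linux-amd64 node add -p ha-238496 --control-plane
	out/minikube-linux-amd64 -p ha-238496 status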

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

TestImageBuild/serial/Setup (49.87s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-819649 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-819649 --driver=kvm2 : (49.865785838s)
--- PASS: TestImageBuild/serial/Setup (49.87s)

TestImageBuild/serial/NormalBuild (2.69s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-819649
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-819649: (2.690794253s)
--- PASS: TestImageBuild/serial/NormalBuild (2.69s)

TestImageBuild/serial/BuildWithBuildArg (1.14s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-819649
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-819649: (1.143785872s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.14s)
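
Note: image build forwards options to the underlying builder via repeated --build-opt flags; here build-arg injects ENV_A into the build and no-cache forces a clean build. The same invocation, reflowed for readability:

	out/minikube-linux-amd64 image build -t aaa:latest \
		--build-opt=build-arg=ENV_A=test_env_str \
		--build-opt=no-cache \
		./testdata/image-build/test-arg -p image-819649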

TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-819649
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-819649
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

TestJSONOutput/start/Command (68.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-836803 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0729 23:32:50.600284   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-836803 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m8.402160955s)
--- PASS: TestJSONOutput/start/Command (68.40s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-836803 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-836803 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-836803 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-836803 --output=json --user=testUser: (13.340632155s)
--- PASS: TestJSONOutput/stop/Command (13.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-092954 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-092954 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.581046ms)
-- stdout --
	{"specversion":"1.0","id":"a1866bd0-98a3-48ed-b3a2-85b6c296e1ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-092954] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"65de25b3-4072-4943-9a91-6ddd9495a832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19347"}}
	{"specversion":"1.0","id":"d2776e41-a5d0-4250-b7b8-5d6141ea6925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"60189470-5d63-42e0-8ddb-09024796be73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig"}}
	{"specversion":"1.0","id":"cd3a5e03-9714-4202-b5c8-a77b1605a082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube"}}
	{"specversion":"1.0","id":"248a83c7-a578-4dbf-808a-b310e56b5e44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1dda67b2-2614-4455-b47b-ddd44c780dfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b7e6fd01-57b2-4e3f-ae23-8e9d1354eda0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-092954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-092954
--- PASS: TestErrorJSONOutput (0.19s)
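
The stdout block above shows the shape of every minikube command run with --output=json: one CloudEvents-style JSON object per line, with the kind of event in type and string-valued payload fields under data. A minimal Go sketch for decoding such a stream follows; it assumes only the type values and data keys visible above, and the program itself is an illustration, not part of the test suite.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors only the fields visible in the stdout above; any other
    // fields minikube emits are silently ignored by encoding/json.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json ... | this program
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // tolerate non-JSON lines
            }
            switch ev.Type {
            case "io.k8s.sigs.minikube.step":
                fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
            case "io.k8s.sigs.minikube.error":
                fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            default: // io.k8s.sigs.minikube.info and anything else with a message
                fmt.Println(ev.Data["message"])
            }
        }
    }

Piping the JSON output of a start, pause, or stop command into this program would print one summary line per event, ending with the DRV_UNSUPPORTED_OS error seen above.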

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (111.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-409969 --driver=kvm2 
E0729 23:34:24.231529   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-409969 --driver=kvm2 : (57.293869454s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-412824 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-412824 --driver=kvm2 : (51.134810585s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-409969
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-412824
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-412824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-412824
helpers_test.go:175: Cleaning up "first-409969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-409969
--- PASS: TestMinikubeProfile (111.25s)

TestMountStart/serial/StartWithMountFirst (36.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-021634 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0729 23:35:47.273738   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-021634 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (35.272338391s)
--- PASS: TestMountStart/serial/StartWithMountFirst (36.27s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-021634 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-021634 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
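
The two ssh probes above are the entire verification: /minikube-host must be listed, and the mount must be 9p. A standalone sketch of the same probe driven through plain os/exec, assuming the profile name from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const profile = "mount-start-1-021634" // profile name from this run
        // Equivalent of `minikube ssh -- mount | grep 9p`: list the guest's
        // mounts and keep only the 9p entries.
        out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "9p") {
                fmt.Println(line) // should mention /minikube-host plus the msize/port options
            }
        }
    }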

                                                
                                    
TestMountStart/serial/StartWithMountSecond (33.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-038214 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-038214 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (32.705042252s)
--- PASS: TestMountStart/serial/StartWithMountSecond (33.71s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-038214 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-038214 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (1.06s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-021634 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-021634 --alsologtostderr -v=5: (1.056189973s)
--- PASS: TestMountStart/serial/DeleteFirst (1.06s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-038214 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-038214 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (2.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-038214
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-038214: (2.268837698s)
--- PASS: TestMountStart/serial/Stop (2.27s)

TestMountStart/serial/RestartStopped (27.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-038214
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-038214: (26.118187196s)
--- PASS: TestMountStart/serial/RestartStopped (27.12s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-038214 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-038214 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (142.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-118557 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0729 23:37:50.600112   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:39:24.229047   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-118557 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m22.363314197s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (142.77s)

TestMultiNode/serial/DeployApp2Nodes (5.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-118557 -- rollout status deployment/busybox: (3.783267304s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-chmvt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-fngjf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-chmvt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-fngjf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-chmvt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-fngjf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.26s)
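
The jsonpath query above is the heart of the deployment check: it returns one space-separated podIP per busybox replica, which the test then counts and de-duplicates. A rough sketch of that pattern; it assumes a plain kubectl pointed at the current context rather than `minikube kubectl -p`, and it omits the retry loop the real test wraps around the query:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same query as multinode_test.go:505: one podIP per running replica.
        out, err := exec.Command("kubectl", "get", "pods",
            "-o", "jsonpath={.items[*].status.podIP}").Output()
        if err != nil {
            panic(err)
        }
        ips := strings.Fields(string(out))
        seen := map[string]bool{}
        for _, ip := range ips {
            if seen[ip] {
                fmt.Println("duplicate pod IP:", ip)
            }
            seen[ip] = true
        }
        fmt.Printf("%d pod IPs: %v\n", len(ips), ips)
    }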

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-chmvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-chmvt -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-fngjf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-118557 -- exec busybox-fc5497c4f-fngjf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (61.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-118557 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-118557 -v 3 --alsologtostderr: (1m0.708947474s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (61.26s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-118557 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.95s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp testdata/cp-test.txt multinode-118557:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3810148294/001/cp-test_multinode-118557.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557:/home/docker/cp-test.txt multinode-118557-m02:/home/docker/cp-test_multinode-118557_multinode-118557-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m02 "sudo cat /home/docker/cp-test_multinode-118557_multinode-118557-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557:/home/docker/cp-test.txt multinode-118557-m03:/home/docker/cp-test_multinode-118557_multinode-118557-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m03 "sudo cat /home/docker/cp-test_multinode-118557_multinode-118557-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp testdata/cp-test.txt multinode-118557-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3810148294/001/cp-test_multinode-118557-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557-m02:/home/docker/cp-test.txt multinode-118557:/home/docker/cp-test_multinode-118557-m02_multinode-118557.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557 "sudo cat /home/docker/cp-test_multinode-118557-m02_multinode-118557.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557-m02:/home/docker/cp-test.txt multinode-118557-m03:/home/docker/cp-test_multinode-118557-m02_multinode-118557-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m03 "sudo cat /home/docker/cp-test_multinode-118557-m02_multinode-118557-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp testdata/cp-test.txt multinode-118557-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3810148294/001/cp-test_multinode-118557-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557-m03:/home/docker/cp-test.txt multinode-118557:/home/docker/cp-test_multinode-118557-m03_multinode-118557.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557 "sudo cat /home/docker/cp-test_multinode-118557-m03_multinode-118557.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 cp multinode-118557-m03:/home/docker/cp-test.txt multinode-118557-m02:/home/docker/cp-test_multinode-118557-m03_multinode-118557-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 ssh -n multinode-118557-m02 "sudo cat /home/docker/cp-test_multinode-118557-m03_multinode-118557-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.95s)
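
Every cp/ssh pair above is the same round trip: push a file to a node with `minikube cp`, read it back with `minikube ssh -- sudo cat`, and compare. A condensed sketch of one such round trip, using the profile and node names from this run; the comparison is simplified to a byte equality check:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const profile, node = "multinode-118557", "multinode-118557-m02"
        const src = "testdata/cp-test.txt"
        want, err := os.ReadFile(src)
        if err != nil {
            panic(err)
        }
        // Push the file onto the node, as helpers_test.go:556 does.
        if err := exec.Command("minikube", "-p", profile, "cp", src,
            node+":/home/docker/cp-test.txt").Run(); err != nil {
            panic(err)
        }
        // Read it back over SSH (helpers_test.go:534) and compare.
        got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
            "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("round trip ok:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
    }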

                                                
                                    
TestMultiNode/serial/StopNode (3.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-118557 node stop m03: (2.565217219s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-118557 status: exit status 7 (409.744928ms)
-- stdout --
	multinode-118557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-118557-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-118557-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr: exit status 7 (415.889898ms)
-- stdout --
	multinode-118557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-118557-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-118557-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0729 23:41:02.862030   45579 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:41:02.862144   45579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:41:02.862153   45579 out.go:304] Setting ErrFile to fd 2...
	I0729 23:41:02.862158   45579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:41:02.862369   45579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:41:02.862575   45579 out.go:298] Setting JSON to false
	I0729 23:41:02.862603   45579 mustload.go:65] Loading cluster: multinode-118557
	I0729 23:41:02.862909   45579 notify.go:220] Checking for updates...
	I0729 23:41:02.863830   45579 config.go:182] Loaded profile config "multinode-118557": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:41:02.863887   45579 status.go:255] checking status of multinode-118557 ...
	I0729 23:41:02.864726   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:02.864776   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:02.880695   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0729 23:41:02.881139   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:02.881763   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:02.881825   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:02.882214   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:02.882453   45579 main.go:141] libmachine: (multinode-118557) Calling .GetState
	I0729 23:41:02.884075   45579 status.go:330] multinode-118557 host status = "Running" (err=<nil>)
	I0729 23:41:02.884093   45579 host.go:66] Checking if "multinode-118557" exists ...
	I0729 23:41:02.884380   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:02.884413   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:02.899427   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0729 23:41:02.899811   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:02.900261   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:02.900290   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:02.900576   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:02.900765   45579 main.go:141] libmachine: (multinode-118557) Calling .GetIP
	I0729 23:41:02.903531   45579 main.go:141] libmachine: (multinode-118557) DBG | domain multinode-118557 has defined MAC address 52:54:00:98:f9:d5 in network mk-multinode-118557
	I0729 23:41:02.903920   45579 main.go:141] libmachine: (multinode-118557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f9:d5", ip: ""} in network mk-multinode-118557: {Iface:virbr1 ExpiryTime:2024-07-30 00:37:37 +0000 UTC Type:0 Mac:52:54:00:98:f9:d5 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:multinode-118557 Clientid:01:52:54:00:98:f9:d5}
	I0729 23:41:02.903967   45579 main.go:141] libmachine: (multinode-118557) DBG | domain multinode-118557 has defined IP address 192.168.39.112 and MAC address 52:54:00:98:f9:d5 in network mk-multinode-118557
	I0729 23:41:02.904104   45579 host.go:66] Checking if "multinode-118557" exists ...
	I0729 23:41:02.904386   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:02.904421   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:02.919713   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0729 23:41:02.920079   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:02.920518   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:02.920540   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:02.920825   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:02.920997   45579 main.go:141] libmachine: (multinode-118557) Calling .DriverName
	I0729 23:41:02.921166   45579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 23:41:02.921193   45579 main.go:141] libmachine: (multinode-118557) Calling .GetSSHHostname
	I0729 23:41:02.923873   45579 main.go:141] libmachine: (multinode-118557) DBG | domain multinode-118557 has defined MAC address 52:54:00:98:f9:d5 in network mk-multinode-118557
	I0729 23:41:02.924302   45579 main.go:141] libmachine: (multinode-118557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f9:d5", ip: ""} in network mk-multinode-118557: {Iface:virbr1 ExpiryTime:2024-07-30 00:37:37 +0000 UTC Type:0 Mac:52:54:00:98:f9:d5 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:multinode-118557 Clientid:01:52:54:00:98:f9:d5}
	I0729 23:41:02.924331   45579 main.go:141] libmachine: (multinode-118557) DBG | domain multinode-118557 has defined IP address 192.168.39.112 and MAC address 52:54:00:98:f9:d5 in network mk-multinode-118557
	I0729 23:41:02.924594   45579 main.go:141] libmachine: (multinode-118557) Calling .GetSSHPort
	I0729 23:41:02.924772   45579 main.go:141] libmachine: (multinode-118557) Calling .GetSSHKeyPath
	I0729 23:41:02.924908   45579 main.go:141] libmachine: (multinode-118557) Calling .GetSSHUsername
	I0729 23:41:02.925033   45579 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/multinode-118557/id_rsa Username:docker}
	I0729 23:41:03.006796   45579 ssh_runner.go:195] Run: systemctl --version
	I0729 23:41:03.013284   45579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:41:03.030435   45579 kubeconfig.go:125] found "multinode-118557" server: "https://192.168.39.112:8443"
	I0729 23:41:03.030459   45579 api_server.go:166] Checking apiserver status ...
	I0729 23:41:03.030491   45579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 23:41:03.046893   45579 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1903/cgroup
	W0729 23:41:03.058709   45579 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1903/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 23:41:03.058762   45579 ssh_runner.go:195] Run: ls
	I0729 23:41:03.063803   45579 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0729 23:41:03.068036   45579 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0729 23:41:03.068060   45579 status.go:422] multinode-118557 apiserver status = Running (err=<nil>)
	I0729 23:41:03.068069   45579 status.go:257] multinode-118557 status: &{Name:multinode-118557 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:41:03.068089   45579 status.go:255] checking status of multinode-118557-m02 ...
	I0729 23:41:03.068369   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:03.068417   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:03.083395   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0729 23:41:03.083855   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:03.084309   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:03.084329   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:03.084662   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:03.084827   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .GetState
	I0729 23:41:03.086359   45579 status.go:330] multinode-118557-m02 host status = "Running" (err=<nil>)
	I0729 23:41:03.086375   45579 host.go:66] Checking if "multinode-118557-m02" exists ...
	I0729 23:41:03.086659   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:03.086688   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:03.101279   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0729 23:41:03.101649   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:03.102072   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:03.102100   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:03.102376   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:03.102544   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .GetIP
	I0729 23:41:03.105089   45579 main.go:141] libmachine: (multinode-118557-m02) DBG | domain multinode-118557-m02 has defined MAC address 52:54:00:4a:0c:78 in network mk-multinode-118557
	I0729 23:41:03.105472   45579 main.go:141] libmachine: (multinode-118557-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:0c:78", ip: ""} in network mk-multinode-118557: {Iface:virbr1 ExpiryTime:2024-07-30 00:39:00 +0000 UTC Type:0 Mac:52:54:00:4a:0c:78 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-118557-m02 Clientid:01:52:54:00:4a:0c:78}
	I0729 23:41:03.105498   45579 main.go:141] libmachine: (multinode-118557-m02) DBG | domain multinode-118557-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:4a:0c:78 in network mk-multinode-118557
	I0729 23:41:03.105617   45579 host.go:66] Checking if "multinode-118557-m02" exists ...
	I0729 23:41:03.105913   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:03.105944   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:03.120447   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0729 23:41:03.120820   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:03.121260   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:03.121281   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:03.121564   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:03.121777   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .DriverName
	I0729 23:41:03.121992   45579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 23:41:03.122011   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .GetSSHHostname
	I0729 23:41:03.124488   45579 main.go:141] libmachine: (multinode-118557-m02) DBG | domain multinode-118557-m02 has defined MAC address 52:54:00:4a:0c:78 in network mk-multinode-118557
	I0729 23:41:03.124858   45579 main.go:141] libmachine: (multinode-118557-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:0c:78", ip: ""} in network mk-multinode-118557: {Iface:virbr1 ExpiryTime:2024-07-30 00:39:00 +0000 UTC Type:0 Mac:52:54:00:4a:0c:78 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-118557-m02 Clientid:01:52:54:00:4a:0c:78}
	I0729 23:41:03.124886   45579 main.go:141] libmachine: (multinode-118557-m02) DBG | domain multinode-118557-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:4a:0c:78 in network mk-multinode-118557
	I0729 23:41:03.125031   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .GetSSHPort
	I0729 23:41:03.125166   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .GetSSHKeyPath
	I0729 23:41:03.125322   45579 main.go:141] libmachine: (multinode-118557-m02) Calling .GetSSHUsername
	I0729 23:41:03.125439   45579 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19347-12221/.minikube/machines/multinode-118557-m02/id_rsa Username:docker}
	I0729 23:41:03.202854   45579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 23:41:03.217983   45579 status.go:257] multinode-118557-m02 status: &{Name:multinode-118557-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:41:03.218032   45579 status.go:255] checking status of multinode-118557-m03 ...
	I0729 23:41:03.218414   45579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:41:03.218457   45579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:41:03.235218   45579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0729 23:41:03.235595   45579 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:41:03.236093   45579 main.go:141] libmachine: Using API Version  1
	I0729 23:41:03.236113   45579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:41:03.236406   45579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:41:03.236584   45579 main.go:141] libmachine: (multinode-118557-m03) Calling .GetState
	I0729 23:41:03.237864   45579 status.go:330] multinode-118557-m03 host status = "Stopped" (err=<nil>)
	I0729 23:41:03.237883   45579 status.go:343] host is not running, skipping remaining checks
	I0729 23:41:03.237888   45579 status.go:257] multinode-118557-m03 status: &{Name:multinode-118557-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.39s)
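
The non-zero exits above are intentional: with m03 stopped, `minikube status` still prints the per-node table but returns exit status 7, so the test asserts on the exit code rather than treating it as a failure. A sketch of capturing that code from Go; the value 7 is simply what this run produced, and the sketch attaches no hard-coded meaning to it:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "multinode-118557", "status").Output()
        fmt.Print(string(out)) // the per-node table is printed even on non-zero exit
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Non-zero here means some component is not running; the table
            // above says which one. This run returned 7 with m03 stopped.
            fmt.Println("status exit code:", ee.ExitCode())
        } else if err != nil {
            panic(err)
        }
    }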

                                                
                                    
TestMultiNode/serial/StartAfterStop (43.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-118557 node start m03 -v=7 --alsologtostderr: (42.872377831s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (43.47s)

TestMultiNode/serial/RestartKeepsNodes (190.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-118557
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-118557
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-118557: (27.370186952s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-118557 --wait=true -v=8 --alsologtostderr
E0729 23:42:50.600211   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0729 23:44:24.228353   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-118557 --wait=true -v=8 --alsologtostderr: (2m42.580912699s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-118557
--- PASS: TestMultiNode/serial/RestartKeepsNodes (190.03s)
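
The two `node list` calls above bracket the stop/start cycle; the property under test is only that the node list is unchanged by a full cluster restart. A trimmed sketch of that comparison, with the stop and start elided and raw output equality standing in for the real test's node-by-node check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // nodeList captures `minikube node list -p <profile>` for comparison.
    func nodeList(profile string) (string, error) {
        out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
        return string(out), err
    }

    func main() {
        const profile = "multinode-118557"
        before, err := nodeList(profile)
        if err != nil {
            panic(err)
        }
        // ... `minikube stop -p` and `minikube start -p --wait=true` run here ...
        after, err := nodeList(profile)
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes preserved across restart:", before == after)
    }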

                                                
                                    
TestMultiNode/serial/DeleteNode (2.32s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-118557 node delete m03: (1.814118187s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.32s)

TestMultiNode/serial/StopMultiNode (25.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-118557 stop: (25.636957027s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-118557 status: exit status 7 (81.36697ms)
-- stdout --
	multinode-118557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-118557-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr: exit status 7 (80.734203ms)
-- stdout --
	multinode-118557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-118557-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0729 23:45:24.818340   47383 out.go:291] Setting OutFile to fd 1 ...
	I0729 23:45:24.818447   47383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:45:24.818458   47383 out.go:304] Setting ErrFile to fd 2...
	I0729 23:45:24.818463   47383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 23:45:24.818628   47383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-12221/.minikube/bin
	I0729 23:45:24.818821   47383 out.go:298] Setting JSON to false
	I0729 23:45:24.818846   47383 mustload.go:65] Loading cluster: multinode-118557
	I0729 23:45:24.818951   47383 notify.go:220] Checking for updates...
	I0729 23:45:24.819259   47383 config.go:182] Loaded profile config "multinode-118557": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 23:45:24.819274   47383 status.go:255] checking status of multinode-118557 ...
	I0729 23:45:24.819753   47383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:45:24.819822   47383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:45:24.840997   47383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0729 23:45:24.841402   47383 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:45:24.841927   47383 main.go:141] libmachine: Using API Version  1
	I0729 23:45:24.841961   47383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:45:24.842313   47383 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:45:24.842497   47383 main.go:141] libmachine: (multinode-118557) Calling .GetState
	I0729 23:45:24.844079   47383 status.go:330] multinode-118557 host status = "Stopped" (err=<nil>)
	I0729 23:45:24.844093   47383 status.go:343] host is not running, skipping remaining checks
	I0729 23:45:24.844099   47383 status.go:257] multinode-118557 status: &{Name:multinode-118557 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 23:45:24.844115   47383 status.go:255] checking status of multinode-118557-m02 ...
	I0729 23:45:24.844385   47383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 23:45:24.844425   47383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 23:45:24.858370   47383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I0729 23:45:24.858686   47383 main.go:141] libmachine: () Calling .GetVersion
	I0729 23:45:24.859094   47383 main.go:141] libmachine: Using API Version  1
	I0729 23:45:24.859121   47383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 23:45:24.859393   47383 main.go:141] libmachine: () Calling .GetMachineName
	I0729 23:45:24.859703   47383 main.go:141] libmachine: (multinode-118557-m02) Calling .GetState
	I0729 23:45:24.861000   47383 status.go:330] multinode-118557-m02 host status = "Stopped" (err=<nil>)
	I0729 23:45:24.861013   47383 status.go:343] host is not running, skipping remaining checks
	I0729 23:45:24.861019   47383 status.go:257] multinode-118557-m02 status: &{Name:multinode-118557-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.80s)

TestMultiNode/serial/RestartMultiNode (123.34s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-118557 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0729 23:45:53.646330   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-118557 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m2.836736425s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-118557 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (123.34s)

TestMultiNode/serial/ValidateNameConflict (49.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-118557
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-118557-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-118557-m02 --driver=kvm2 : exit status 14 (58.335789ms)

-- stdout --
	* [multinode-118557-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19347
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-118557-m02' is duplicated with machine name 'multinode-118557-m02' in profile 'multinode-118557'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-118557-m03 --driver=kvm2 
E0729 23:47:50.600055   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-118557-m03 --driver=kvm2 : (48.566705415s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-118557
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-118557: exit status 80 (211.293905ms)

-- stdout --
	* Adding node m03 to cluster multinode-118557 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-118557-m03 already exists in multinode-118557-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-118557-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.86s)
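
For reference, the profile-name collision above can be reproduced by hand. A minimal sketch, assuming an existing two-node profile named "demo" (all profile names here are illustrative, not from this run):

	$ minikube start -p demo --nodes=2 --driver=kvm2   # creates machines "demo" and "demo-m02"
	$ minikube start -p demo-m02 --driver=kvm2         # exit 14 (MK_USAGE): profile name must be unique
	$ minikube start -p demo-m03 --driver=kvm2         # a separate profile starts fine, but...
	$ minikube node add -p demo                        # exit 80 (GUEST_NODE_ADD): machine name "demo-m03" is taken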

TestPreload (182.74s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-781122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0729 23:49:24.228163   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-781122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m43.038389059s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-781122 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-781122 image pull gcr.io/k8s-minikube/busybox: (2.06251638s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-781122
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-781122: (13.293399227s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-781122 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-781122 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m3.10710945s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-781122 image list
helpers_test.go:175: Cleaning up "test-preload-781122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-781122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-781122: (1.053335525s)
--- PASS: TestPreload (182.74s)
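
For reference, the preload round-trip exercised above reduces to the following commands. A sketch with an illustrative profile name:

	$ minikube start -p demo --preload=false --kubernetes-version=v1.24.4 --driver=kvm2
	$ minikube -p demo image pull gcr.io/k8s-minikube/busybox   # add an image no preload tarball provides
	$ minikube stop -p demo
	$ minikube start -p demo --driver=kvm2                      # restart on the default Kubernetes version
	$ minikube -p demo image list                               # busybox should still be present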

TestScheduledStopUnix (122.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-327282 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-327282 --memory=2048 --driver=kvm2 : (50.734655678s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-327282 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-327282 -n scheduled-stop-327282
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-327282 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-327282 --cancel-scheduled
E0729 23:52:27.274873   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-327282 -n scheduled-stop-327282
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-327282
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-327282 --schedule 15s
E0729 23:52:50.600786   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-327282
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-327282: exit status 7 (64.595165ms)

-- stdout --
	scheduled-stop-327282
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-327282 -n scheduled-stop-327282
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-327282 -n scheduled-stop-327282: exit status 7 (63.362409ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-327282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-327282
--- PASS: TestScheduledStopUnix (122.28s)
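
The scheduled-stop sequence above amounts to arming, cancelling, and re-arming a stop timer. A sketch, illustrative profile name:

	$ minikube stop -p demo --schedule 5m         # arm a stop five minutes out; host stays Running
	$ minikube stop -p demo --cancel-scheduled    # disarm it
	$ minikube stop -p demo --schedule 15s        # re-arm; once it fires...
	$ minikube status -p demo --format={{.Host}}  # ...prints "Stopped" and exits 7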

TestSkaffold (141.33s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1755376770 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-387359 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-387359 --memory=2600 --driver=kvm2 : (53.567480185s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1755376770 run --minikube-profile skaffold-387359 --kube-context skaffold-387359 --status-check=true --port-forward=false --interactive=false
E0729 23:54:24.228193   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1755376770 run --minikube-profile skaffold-387359 --kube-context skaffold-387359 --status-check=true --port-forward=false --interactive=false: (1m12.643646581s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6bc5854c8f-976jv" [f634edef-ed54-4911-b3ab-f854cfe53861] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003519728s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-76bb588dfc-4sxpn" [ac2a72c7-826b-4812-8247-52b240365812] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003326584s
helpers_test.go:175: Cleaning up "skaffold-387359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-387359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-387359: (1.167596795s)
--- PASS: TestSkaffold (141.33s)

TestRunningBinaryUpgrade (117.58s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0730 00:00:33.982710   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:33.988080   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:33.998178   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:34.018305   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:34.058510   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:34.139492   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:34.300013   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:34.620686   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.147152874 start -p running-upgrade-425969 --memory=2200 --vm-driver=kvm2 
E0730 00:00:35.261304   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:36.542280   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:00:39.103490   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.147152874 start -p running-upgrade-425969 --memory=2200 --vm-driver=kvm2 : (1m2.751396388s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-425969 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-425969 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (51.143044195s)
helpers_test.go:175: Cleaning up "running-upgrade-425969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-425969
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-425969: (1.1790322s)
--- PASS: TestRunningBinaryUpgrade (117.58s)

TestKubernetesUpgrade (228.76s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m27.870647285s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-604507
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-604507: (12.567772431s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-604507 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-604507 status --format={{.Host}}: exit status 7 (64.343635ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (1m8.946555758s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-604507 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (76.981394ms)

-- stdout --
	* [kubernetes-upgrade-604507] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19347
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-604507
	    minikube start -p kubernetes-upgrade-604507 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6045072 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-604507 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
E0730 00:00:54.464466   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:01:14.945556   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-604507 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (57.885381941s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-604507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-604507
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-604507: (1.291040482s)
--- PASS: TestKubernetesUpgrade (228.76s)
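
The upgrade path exercised above, condensed into a sketch with an illustrative profile name (the versions are the ones this run used):

	$ minikube start -p demo --kubernetes-version=v1.20.0 --driver=kvm2
	$ minikube stop -p demo
	$ minikube start -p demo --kubernetes-version=v1.31.0-beta.0 --driver=kvm2   # upgrade is allowed
	$ minikube start -p demo --kubernetes-version=v1.20.0 --driver=kvm2          # downgrade refused: exit 106 (K8S_DOWNGRADE_UNSUPPORTED)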

TestStoppedBinaryUpgrade/Setup (2.88s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.88s)

TestStoppedBinaryUpgrade/Upgrade (162.85s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.934287837 start -p stopped-upgrade-270299 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.934287837 start -p stopped-upgrade-270299 --memory=2200 --vm-driver=kvm2 : (1m30.989618736s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.934287837 -p stopped-upgrade-270299 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.934287837 -p stopped-upgrade-270299 stop: (12.850224404s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-270299 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0730 00:00:44.224150   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-270299 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (59.006784303s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (162.85s)
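
The flow here: create and stop a cluster with an old release binary, then restart it with the binary under test. A sketch; the old-binary path and profile name are illustrative:

	$ /tmp/minikube-v1.26.0 start -p demo --memory=2200 --vm-driver=kvm2   # old binary (legacy --vm-driver flag)
	$ /tmp/minikube-v1.26.0 -p demo stop
	$ out/minikube-linux-amd64 start -p demo --memory=2200 --driver=kvm2   # new binary adopts the stopped cluster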

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-270299
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-270299: (1.1788019s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-401586 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-401586 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (59.752776ms)

-- stdout --
	* [NoKubernetes-401586] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19347
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19347-12221/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-12221/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
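
As the MK_USAGE text above says, --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A sketch, illustrative profile name:

	$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2   # exit 14 (MK_USAGE)
	$ minikube config unset kubernetes-version   # clear any global version pin, per the suggestion
	$ minikube start -p demo --no-kubernetes --driver=kvm2                             # VM and runtime only, no kubelet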

TestNoKubernetes/serial/StartWithK8s (60.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-401586 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-401586 --driver=kvm2 : (1m0.498236772s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-401586 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (60.83s)

TestPause/serial/Start (121.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-147297 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0730 00:01:55.906261   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-147297 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m1.521602427s)
--- PASS: TestPause/serial/Start (121.52s)

TestNetworkPlugins/group/auto/Start (112.25s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0730 00:02:33.646603   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m52.248309027s)
--- PASS: TestNetworkPlugins/group/auto/Start (112.25s)

TestNoKubernetes/serial/StartWithStopK8s (64.38s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-401586 --no-kubernetes --driver=kvm2 
E0730 00:02:50.600079   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0730 00:03:16.471701   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:16.476988   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:16.487247   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:16.507541   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:16.547877   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:16.628221   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:16.788676   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:17.109271   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:17.750221   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:17.827467   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:03:19.030928   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:21.591945   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:26.712199   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:03:36.953388   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-401586 --no-kubernetes --driver=kvm2 : (1m3.193007503s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-401586 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-401586 status -o json: exit status 2 (235.170447ms)

-- stdout --
	{"Name":"NoKubernetes-401586","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-401586
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (64.38s)

TestNetworkPlugins/group/kindnet/Start (92.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m32.179232432s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.18s)

TestPause/serial/SecondStartNoReconfiguration (65.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-147297 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-147297 --alsologtostderr -v=1 --driver=kvm2 : (1m5.432452943s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (65.46s)

TestNoKubernetes/serial/Start (53.85s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-401586 --no-kubernetes --driver=kvm2 
E0730 00:03:57.434390   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-401586 --no-kubernetes --driver=kvm2 : (53.85075787s)
--- PASS: TestNoKubernetes/serial/Start (53.85s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mzlbd" [52e69098-dafb-4888-80da-2edc77ebf8f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0730 00:04:24.228797   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-mzlbd" [52e69098-dafb-4888-80da-2edc77ebf8f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004757602s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
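
Each CNI group in this file runs the same three probes against its netcat deployment; with an illustrative context name they are:

	$ kubectl --context demo exec deployment/netcat -- nslookup kubernetes.default                 # DNS
	$ kubectl --context demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" # Localhost
	$ kubectl --context demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # HairPin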

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-401586 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-401586 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.992859ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)

TestNoKubernetes/serial/Stop (2.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-401586
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-401586: (2.433802913s)
--- PASS: TestNoKubernetes/serial/Stop (2.43s)

TestNoKubernetes/serial/StartNoArgs (27.05s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-401586 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-401586 --driver=kvm2 : (27.054161002s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.05s)

TestNetworkPlugins/group/calico/Start (117.54s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m57.539236424s)
--- PASS: TestNetworkPlugins/group/calico/Start (117.54s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-147297 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-147297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-147297 --output=json --layout=cluster: exit status 2 (253.584145ms)

-- stdout --
	{"Name":"pause-147297","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-147297","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

TestPause/serial/Unpause (0.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-147297 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-147297 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

TestPause/serial/DeletePaused (1.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-147297 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-147297 --alsologtostderr -v=5: (1.048834434s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

TestPause/serial/VerifyDeletedResources (3.39s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.384854554s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.39s)
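
The TestPause serial group walks one profile through the full pause lifecycle. A sketch, illustrative profile name:

	$ minikube start -p demo --install-addons=false --wait=all --driver=kvm2
	$ minikube pause -p demo
	$ minikube status -p demo --output=json --layout=cluster   # exit 2; StatusName "Paused" (code 418)
	$ minikube unpause -p demo
	$ minikube pause -p demo
	$ minikube delete -p demo
	$ minikube profile list --output json                      # deleted profile should be gone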

TestNetworkPlugins/group/custom-flannel/Start (118.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m58.173682152s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (118.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cp65q" [6c435b6d-4baf-4fec-8146-c6b3ad31bc6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004264438s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-401586 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-401586 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.807794ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/false/Start (159.11s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m39.113892455s)
--- PASS: TestNetworkPlugins/group/false/Start (159.11s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tscvk" [6b6feaca-4956-4c2b-99d1-0a07a39474e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tscvk" [6b6feaca-4956-4c2b-99d1-0a07a39474e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003848281s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (118.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0730 00:06:00.315857   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
E0730 00:06:01.668638   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m58.817180522s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (118.82s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q85qk" [4253906a-fe13-4384-819e-0caafc4687ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005713853s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6jcgd" [22ef969a-4c10-4587-a851-794a39480d94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6jcgd" [22ef969a-4c10-4587-a851-794a39480d94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00529362s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s7jtq" [4fc0fc78-7925-4108-b5e4-bbfb62e9c8b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s7jtq" [4fc0fc78-7925-4108-b5e4-bbfb62e9c8b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005112739s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (79.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m19.853068781s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.85s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (130.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m10.342608579s)
--- PASS: TestNetworkPlugins/group/bridge/Start (130.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dm2hd" [3e443371-02e3-4708-9a06-c77a6a9a8051] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0730 00:07:50.600794   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-dm2hd" [3e443371-02e3-4708-9a06-c77a6a9a8051] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004887961s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (14.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5xjm7" [4501f1cf-b90e-4b24-b610-6362a99b66d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5xjm7" [4501f1cf-b90e-4b24-b610-6362a99b66d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.004443217s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (112.59s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0730 00:08:16.472124   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-594220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m52.585156927s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (112.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (201.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-088514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0730 00:08:44.156653   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-088514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m21.190683754s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (201.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gwlq6" [6a77359b-9fdb-49dc-97eb-276f739d3b39] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004768147s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-n7vg5" [95852a6c-bd69-4294-9465-795716151086] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-n7vg5" [95852a6c-bd69-4294-9465-795716151086] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004813408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (94.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-670510 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
E0730 00:09:22.662307   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:22.667607   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:22.677896   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:22.698239   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:22.738559   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:22.819551   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:22.979875   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:23.300915   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:23.941212   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:24.228502   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/functional-652848/client.crt: no such file or directory
E0730 00:09:25.221342   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:27.782135   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:09:32.902786   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-670510 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (1m34.604563151s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.60s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v9xwx" [c72b071f-5937-4054-9bd5-5d66590a8596] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0730 00:09:43.143916   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-v9xwx" [c72b071f-5937-4054-9bd5-5d66590a8596] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004599974s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-594220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-594220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qmjs9" [a6ce6e28-90fa-41de-9349-f792243e25fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qmjs9" [a6ce6e28-90fa-41de-9349-f792243e25fb] Running
E0730 00:10:14.049334   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:14.370033   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:15.011147   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:16.291927   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:18.852976   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.006825685s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (111.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-158591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0730 00:10:13.732095   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:13.737383   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:13.747680   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:13.767974   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:13.808312   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
E0730 00:10:13.888687   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-158591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (1m51.243171983s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (111.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-594220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-594220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-463831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0730 00:10:44.584356   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-463831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (1m18.187962514s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-670510 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [351f56bf-626a-4295-9a3b-3f953db3b304] Pending
E0730 00:10:54.695080   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
helpers_test.go:344: "busybox" [351f56bf-626a-4295-9a3b-3f953db3b304] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [351f56bf-626a-4295-9a3b-3f953db3b304] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005043425s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-670510 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)
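
DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8m for it to go Running, and then execs "ulimit -n" in it, which doubles as a smoke test that the apiserver's exec path works on the freshly started cluster. The final step as a standalone sketch; the context name comes from the log, and the wrapper is illustrative only:

// execprobe.go — hypothetical wrapper for the "ulimit -n" exec check above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-670510",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("exec probe failed:", err)
		return
	}
	fmt.Printf("open-file limit in busybox: %s", out)
}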

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-670510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-670510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.132038401s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-670510 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.65s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-670510 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-670510 --alsologtostderr -v=3: (13.647755377s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-670510 -n no-preload-670510
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-670510 -n no-preload-670510: exit status 7 (68.110311ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-670510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
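
EnableAddonAfterStop deliberately tolerates the non-zero exit above: minikube status encodes machine state in its exit code, and the harness flags exit status 7 as "(may be ok)" here because the host was just stopped, which the stdout ("Stopped") confirms. A sketch of that tolerant check, with the binary path and profile copied from the log and the wrapper itself hypothetical:

// statuscheck.go — hypothetical sketch of treating exit status 7 as expected.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-670510", "-n", "no-preload-670510").Output()
	fmt.Printf("%s\n", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("status error: exit status 7 (may be ok)") // host reports Stopped
	} else if err != nil {
		fmt.Println("unexpected status failure:", err)
	}
}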

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (313.46s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-670510 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
E0730 00:11:35.656144   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kindnet-594220/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-670510 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (5m13.144863614s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-670510 -n no-preload-670510
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-088514 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6720b155-380c-456d-bb56-36e8ff5c130b] Pending
E0730 00:11:48.772695   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:48.777997   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:48.788484   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:48.808797   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:48.849095   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:48.929381   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:49.089722   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6720b155-380c-456d-bb56-36e8ff5c130b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0730 00:11:49.409932   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:50.050233   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:11:51.331171   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6720b155-380c-456d-bb56-36e8ff5c130b] Running
E0730 00:11:53.891436   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003801161s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-088514 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-463831 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2eb130fa-7bd0-4752-9d0b-ef1469019395] Pending
helpers_test.go:344: "busybox" [2eb130fa-7bd0-4752-9d0b-ef1469019395] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2eb130fa-7bd0-4752-9d0b-ef1469019395] Running
E0730 00:11:59.011712   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004988461s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-463831 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-088514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-088514 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-088514 --alsologtostderr -v=3
E0730 00:12:00.708008   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:00.713369   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:00.724281   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:00.744878   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:00.784988   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:00.865347   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:01.026500   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:01.346880   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-088514 --alsologtostderr -v=3: (13.335955632s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-158591 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22ffff77-fa87-4102-aa67-ff08edabf245] Pending
E0730 00:12:01.987027   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
helpers_test.go:344: "busybox" [22ffff77-fa87-4102-aa67-ff08edabf245] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0730 00:12:03.267322   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
helpers_test.go:344: "busybox" [22ffff77-fa87-4102-aa67-ff08edabf245] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004313174s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-158591 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-463831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-463831 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-463831 --alsologtostderr -v=3
E0730 00:12:05.828362   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:06.505021   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/auto-594220/client.crt: no such file or directory
E0730 00:12:09.252615   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-463831 --alsologtostderr -v=3: (13.336594406s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-158591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0730 00:12:10.948549   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-158591 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-158591 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-158591 --alsologtostderr -v=3: (12.642007454s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-088514 -n old-k8s-version-088514
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-088514 -n old-k8s-version-088514: exit status 7 (76.743914ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-088514 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (393.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-088514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-088514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m33.041280957s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-088514 -n old-k8s-version-088514
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (393.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831: exit status 7 (58.610137ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-463831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-463831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0730 00:12:21.189506   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-463831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (5m34.66222717s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158591 -n embed-certs-158591
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158591 -n embed-certs-158591: exit status 7 (64.021228ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-158591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (338.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-158591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0730 00:12:29.733265   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:12:41.670361   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:12:47.262775   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
E0730 00:12:47.267888   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
E0730 00:12:47.279015   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
E0730 00:12:47.299515   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
E0730 00:12:47.339658   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
[86 near-identical client-go errors condensed; E0730 00:12:47 through 00:16:29, all of the form:
E0730 <time>   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/<profile>/client.crt: no such file or directory
affected profiles: enable-default-cni-594220, addons-050487, false-594220, kindnet-594220, calico-594220, gvisor-193483, custom-flannel-594220, flannel-594220, auto-594220, functional-652848, bridge-594220, kubenet-594220, skaffold-387359]
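The condensed cert_rotation.go:168 errors come from client-go's certificate-rotation watcher: the long-running test process keeps polling the client certificate of every profile it has contacted, and once an earlier test deletes a profile each poll fails with "no such file or directory" until the process exits. They appear to be harmless in this run, since every test they interleave with passes. A quick manual check against one of the logged paths (path taken verbatim from the log) shows the file is simply gone:

  $ ls /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/flannel-594220/client.crt
  ls: cannot access '/home/jenkins/minikube-integration/19347-12221/.minikube/profiles/flannel-594220/client.crt': No such file or directory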
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-158591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (5m37.95978003s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158591 -n embed-certs-158591
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zzphl" [d3b98f40-4caa-4830-bb35-4848a4a2e664] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zzphl" [d3b98f40-4caa-4830-bb35-4848a4a2e664] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005811173s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)
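UserAppExistsAfterStop verifies that a user workload deployed before the stop (here the dashboard pod) is rescheduled once the cluster restarts: the harness polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until they report Running, with a 9m ceiling. An equivalent manual probe (illustrative one-liner, not part of the harness):

  $ kubectl --context no-preload-670510 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m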

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zzphl" [d3b98f40-4caa-4830-bb35-4848a4a2e664] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006125595s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-670510 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-670510 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
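VerifyKubernetesImages dumps the images loaded in the cluster and reports anything outside minikube's expected set; the busybox and gvisor-addon hits above are leftovers from earlier tests and are informational only. To eyeball the raw listing by hand (the jq filter assumes the JSON entries carry a repoTags array, which may vary across minikube versions):

  $ out/minikube-linux-amd64 -p no-preload-670510 image list --format=json | jq -r '.[].repoTags[]?'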

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.65s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-670510 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670510 -n no-preload-670510
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670510 -n no-preload-670510: exit status 2 (246.72502ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670510 -n no-preload-670510
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670510 -n no-preload-670510: exit status 2 (272.566799ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-670510 --alsologtostderr -v=1
E0730 00:16:48.772241   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670510 -n no-preload-670510
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670510 -n no-preload-670510
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)
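The Pause check drives a full pause/unpause cycle and reads component state back through Go templates. While paused, status exits with code 2 by design, reporting the API server as Paused and the kubelet as Stopped, which the harness accepts ("status error: exit status 2 (may be ok)"); after unpause both queries return 0 again. The same sequence, reproduced by hand with the commands the test runs:

  $ out/minikube-linux-amd64 pause -p no-preload-670510 --alsologtostderr -v=1
  $ out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670510 -n no-preload-670510   # prints Paused, exit 2
  $ out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670510 -n no-preload-670510     # prints Stopped, exit 2
  $ out/minikube-linux-amd64 unpause -p no-preload-670510 --alsologtostderr -v=1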

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (64.38s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-964570 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
E0730 00:16:57.029166   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/skaffold-387359/client.crt: no such file or directory
E0730 00:17:00.708830   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:17:16.454517   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/calico-594220/client.crt: no such file or directory
E0730 00:17:24.943799   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/bridge-594220/client.crt: no such file or directory
E0730 00:17:28.392464   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/custom-flannel-594220/client.crt: no such file or directory
E0730 00:17:47.262441   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
E0730 00:17:50.600779   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/addons-050487/client.crt: no such file or directory
E0730 00:17:51.712222   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/kubenet-594220/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-964570 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (1m4.377905741s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.38s)
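This start exercises a CNI-only bring-up on v1.31.0-beta.0: --network-plugin=cni leaves pod networking to an external CNI, --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 forwards a custom pod CIDR to kubeadm, and --wait=apiserver,system_pods,default_sa waits only for the control plane rather than for schedulable pods, which is why the later newest-cni subtests warn that "cni mode requires additional setup". One way to confirm the CIDR reached kubeadm (illustrative check, not part of the test):

  $ kubectl --context newest-cni-964570 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet   # expect: podSubnet: 10.42.0.0/16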

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-c89mr" [7682dfd7-be11-430d-8fd8-29dc618b4e06] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-c89mr" [7682dfd7-be11-430d-8fd8-29dc618b4e06] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.003683323s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-964570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0730 00:17:55.922414   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/false-594220/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-964570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047276684s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (13.37s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-964570 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-964570 --alsologtostderr -v=3: (13.366710066s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.37s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zfspv" [5434a206-ad52-48c8-a574-9f2bf85ac6f0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005174492s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-c89mr" [7682dfd7-be11-430d-8fd8-29dc618b4e06] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005430211s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-463831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zfspv" [5434a206-ad52-48c8-a574-9f2bf85ac6f0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005062868s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-158591 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964570 -n newest-cni-964570
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964570 -n newest-cni-964570: exit status 7 (64.547852ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-964570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (37.26s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-964570 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-964570 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (37.01570707s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964570 -n newest-cni-964570
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.26s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-463831 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-463831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831: exit status 2 (271.284896ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831
E0730 00:18:14.946076   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/enable-default-cni-594220/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831: exit status 2 (285.095948ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-463831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463831 -n default-k8s-diff-port-463831
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-158591 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.37s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-158591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158591 -n embed-certs-158591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158591 -n embed-certs-158591: exit status 2 (258.91279ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158591 -n embed-certs-158591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158591 -n embed-certs-158591: exit status 2 (280.973037ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-158591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158591 -n embed-certs-158591
E0730 00:18:16.471756   19411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-12221/.minikube/profiles/gvisor-193483/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158591 -n embed-certs-158591
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.37s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7zgtw" [60dcf84d-b827-4c14-9493-68d67541f386] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00445816s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-964570 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/newest-cni/serial/Pause (2.17s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-964570 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964570 -n newest-cni-964570
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964570 -n newest-cni-964570: exit status 2 (231.132893ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964570 -n newest-cni-964570
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964570 -n newest-cni-964570: exit status 2 (226.29502ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-964570 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964570 -n newest-cni-964570
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964570 -n newest-cni-964570
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.17s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7zgtw" [60dcf84d-b827-4c14-9493-68d67541f386] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003772311s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-088514 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-088514 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-088514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-088514 -n old-k8s-version-088514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-088514 -n old-k8s-version-088514: exit status 2 (232.150446ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-088514 -n old-k8s-version-088514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-088514 -n old-k8s-version-088514: exit status 2 (230.844971ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-088514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-088514 -n old-k8s-version-088514
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-088514 -n old-k8s-version-088514
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.38s)

Test skip (34/349)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
47 TestAddons/parallel/Olm 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
115 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
193 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
220 TestKicCustomNetwork 0
221 TestKicExistingNetwork 0
222 TestKicCustomSubnet 0
223 TestKicStaticIP 0
255 TestChangeNoneUser 0
258 TestScheduledStopWindows 0
262 TestInsufficientStorage 0
266 TestMissingContainerUpgrade 0
277 TestNetworkPlugins/group/cilium 3.38
283 TestStartStop/group/disable-driver-mounts 0.17

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
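All eight TunnelCmd subtests below skip for the same reason: minikube tunnel manipulates host routes, so the suite evidently needs 'route' to run under sudo without a password prompt, and on this CI host it cannot. A hedged pre-flight check along the same lines (illustrative; this is not the harness's actual probe):

  $ sudo -n route >/dev/null 2>&1 && echo "tunnel tests can run" || echo "password required to execute route"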

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.38s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-594220 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-594220

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-594220

>>> host: /etc/nsswitch.conf:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/hosts:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/resolv.conf:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-594220

>>> host: crictl pods:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: crictl containers:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> k8s: describe netcat deployment:
error: context "cilium-594220" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-594220" does not exist

>>> k8s: netcat logs:
error: context "cilium-594220" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-594220" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-594220" does not exist

>>> k8s: coredns logs:
error: context "cilium-594220" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-594220" does not exist

>>> k8s: api server logs:
error: context "cilium-594220" does not exist

>>> host: /etc/cni:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: ip a s:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: ip r s:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: iptables-save:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: iptables table nat:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-594220

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-594220

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-594220" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-594220" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-594220

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-594220

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-594220" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-594220" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-594220" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-594220" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-594220" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: kubelet daemon config:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> k8s: kubelet logs:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-594220

>>> host: docker daemon status:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: docker daemon config:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: docker system info:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: cri-docker daemon status:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: cri-docker daemon config:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: cri-dockerd version:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: containerd daemon status:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: containerd daemon config:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: containerd config dump:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: crio daemon status:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: crio daemon config:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: /etc/crio:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

>>> host: crio config:
* Profile "cilium-594220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-594220"

----------------------- debugLogs end: cilium-594220 [took: 3.225563858s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-594220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-594220
--- SKIP: TestNetworkPlugins/group/cilium (3.38s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-428106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-428106
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)