Test Report: KVM_Linux_containerd 18171

99de8c2f99c92d56089a7f0e4f6f6a405ebd3f59:2024-02-13:33127

Failed tests (1/318)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress  | 28.25        |
TestAddons/parallel/Ingress (28.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-174699 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-174699 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-174699 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eb11f745-9bcc-4548-a1db-1da295b3deac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eb11f745-9bcc-4548-a1db-1da295b3deac] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.007580958s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-174699 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.71
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-174699 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (449.133411ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0213 22:00:18.009240   18922 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:00:18.009572   18922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:00:18.009585   18922 out.go:304] Setting ErrFile to fd 2...
	I0213 22:00:18.009614   18922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:00:18.009890   18922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 22:00:18.010257   18922 mustload.go:65] Loading cluster: addons-174699
	I0213 22:00:18.010745   18922 config.go:182] Loaded profile config "addons-174699": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:00:18.010774   18922 addons.go:597] checking whether the cluster is paused
	I0213 22:00:18.010938   18922 config.go:182] Loaded profile config "addons-174699": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:00:18.010955   18922 host.go:66] Checking if "addons-174699" exists ...
	I0213 22:00:18.011567   18922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:00:18.011622   18922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:00:18.026875   18922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0213 22:00:18.027405   18922 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:00:18.028059   18922 main.go:141] libmachine: Using API Version  1
	I0213 22:00:18.028094   18922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:00:18.028500   18922 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:00:18.028742   18922 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 22:00:18.030524   18922 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 22:00:18.030775   18922 ssh_runner.go:195] Run: systemctl --version
	I0213 22:00:18.030809   18922 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 22:00:18.033157   18922 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 22:00:18.033649   18922 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 22:00:18.033680   18922 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 22:00:18.033813   18922 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 22:00:18.033998   18922 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 22:00:18.034171   18922 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 22:00:18.034329   18922 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 22:00:18.151435   18922 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0213 22:00:18.151531   18922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 22:00:18.275150   18922 cri.go:89] found id: "90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366"
	I0213 22:00:18.275172   18922 cri.go:89] found id: "d916c9a851de29aaeb0552d238dceceec4000b936f8328b1a00ad66e84f52411"
	I0213 22:00:18.275176   18922 cri.go:89] found id: "52e3d91b26c6039831727fbb495719ea748279933d7f168299afa466f891dd52"
	I0213 22:00:18.275180   18922 cri.go:89] found id: "457bcaa2ccc58ebce1609b6e12ee72ab343c99b4a419b7012ef4c560006a2ed7"
	I0213 22:00:18.275184   18922 cri.go:89] found id: "9a7880a52abea1dda3f10ac57e5b2d144e6f9109f12c744ccef943ef0285967a"
	I0213 22:00:18.275188   18922 cri.go:89] found id: "d374e9bc77c767f4b1ff4764b403be1c0915344d215d2c083dd3b01cf158cc62"
	I0213 22:00:18.275191   18922 cri.go:89] found id: "fd9c936bef0e279fd4553e4f4dca4ac10a3a0eb6f4df26fa737c64c00d46c67d"
	I0213 22:00:18.275195   18922 cri.go:89] found id: "04c79b3421a5b9536a0b2b9a5050d45d778df8e684d9013d6f7f835db299d984"
	I0213 22:00:18.275198   18922 cri.go:89] found id: "54988d297bda6eaa6921ce686ad484664c0fc991dbae4bf848e35456d1e7c107"
	I0213 22:00:18.275206   18922 cri.go:89] found id: "bea24f5d7b9e0815b905b46cfd2cab971175bc5fe9e410ef3ec69c3e43133291"
	I0213 22:00:18.275212   18922 cri.go:89] found id: "8b946fd95b7dc49fe98ce01d03f0dc6ad61dfda2e2f5eb405aad902b25075c62"
	I0213 22:00:18.275216   18922 cri.go:89] found id: ""
	I0213 22:00:18.275271   18922 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0213 22:00:18.377695   18922 main.go:141] libmachine: Making call to close driver server
	I0213 22:00:18.377711   18922 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 22:00:18.378067   18922 main.go:141] libmachine: Successfully made call to close driver server
	I0213 22:00:18.378094   18922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 22:00:18.381235   18922 out.go:177] 
	W0213 22:00:18.382742   18922 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-13T22:00:18Z" level=error msg="stat /run/containerd/runc/k8s.io/fcc92f52f70fab07ea16d43bac48ab143dfff75f3cf311fd28ca7180afb25251: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-13T22:00:18Z" level=error msg="stat /run/containerd/runc/k8s.io/fcc92f52f70fab07ea16d43bac48ab143dfff75f3cf311fd28ca7180afb25251: no such file or directory"
	
	W0213 22:00:18.382763   18922 out.go:239] * 
	* 
	W0213 22:00:18.385803   18922 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 22:00:18.387617   18922 out.go:177] 

** /stderr **
addons_test.go:308: failed to disable ingress-dns addon. args "out/minikube-linux-amd64 -p addons-174699 addons disable ingress-dns --alsologtostderr -v=1" : exit status 11
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-174699 addons disable ingress --alsologtostderr -v=1: (7.812758486s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-174699 -n addons-174699
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-174699 logs -n 25: (1.421418319s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-579598                                                                     | download-only-579598 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-531817                                                                     | download-only-531817 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-860514                                                                     | download-only-860514 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-579598                                                                     | download-only-579598 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-813907 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | binary-mirror-813907                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36691                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-813907                                                                     | binary-mirror-813907 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | addons-174699                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | addons-174699                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-174699 --wait=true                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-174699 ssh cat                                                                       | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | /opt/local-path-provisioner/pvc-7889aafc-416e-46ca-a309-ac145c00bb50_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-174699 addons disable                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 22:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | addons-174699                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | -p addons-174699                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-174699 ip                                                                            | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	| addons  | addons-174699 addons disable                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-174699 addons                                                                        | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | -p addons-174699                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 22:00 UTC |
	|         | addons-174699                                                                               |                      |         |         |                     |                     |
	| addons  | addons-174699 addons disable                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 22:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-174699 addons                                                                        | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC | 13 Feb 24 22:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-174699 ssh curl -s                                                                   | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC | 13 Feb 24 22:00 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-174699 addons                                                                        | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC | 13 Feb 24 22:00 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-174699 ip                                                                            | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC | 13 Feb 24 22:00 UTC |
	| addons  | addons-174699 addons disable                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC |                     |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-174699 addons disable                                                                | addons-174699        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC | 13 Feb 24 22:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:47
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:47.215654   16933 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:47.215806   16933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:47.215815   16933 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:47.215819   16933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:47.216013   16933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 21:56:47.216658   16933 out.go:298] Setting JSON to false
	I0213 21:56:47.217526   16933 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2355,"bootTime":1707859053,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:47.217581   16933 start.go:138] virtualization: kvm guest
	I0213 21:56:47.220087   16933 out.go:177] * [addons-174699] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:47.221611   16933 notify.go:220] Checking for updates...
	I0213 21:56:47.221646   16933 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 21:56:47.223286   16933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:47.225033   16933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 21:56:47.226698   16933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 21:56:47.228135   16933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 21:56:47.229614   16933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 21:56:47.231276   16933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:47.262765   16933 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 21:56:47.264135   16933 start.go:298] selected driver: kvm2
	I0213 21:56:47.264150   16933 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:47.264165   16933 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 21:56:47.264887   16933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:47.264961   16933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:47.279151   16933 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:47.279194   16933 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:47.279388   16933 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 21:56:47.279444   16933 cni.go:84] Creating CNI manager for ""
	I0213 21:56:47.279459   16933 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0213 21:56:47.279472   16933 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:47.279481   16933 start_flags.go:321] config:
	{Name:addons-174699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-174699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:47.279626   16933 iso.go:125] acquiring lock: {Name:mke99a7249501a63f2cf8fb971ea34ada8b7e341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:47.281416   16933 out.go:177] * Starting control plane node addons-174699 in cluster addons-174699
	I0213 21:56:47.282896   16933 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0213 21:56:47.282936   16933 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0213 21:56:47.282945   16933 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:47.283021   16933 preload.go:174] Found /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 21:56:47.283031   16933 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0213 21:56:47.283331   16933 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/config.json ...
	I0213 21:56:47.283353   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/config.json: {Name:mk37d67747edef6d56b4c0d6c006416d3f9c35b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:56:47.283495   16933 start.go:365] acquiring machines lock for addons-174699: {Name:mk8651221860d169faf354adc715d2a2c0a34a21 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 21:56:47.283539   16933 start.go:369] acquired machines lock for "addons-174699" in 32.097µs
	I0213 21:56:47.283556   16933 start.go:93] Provisioning new machine with config: &{Name:addons-174699 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-174699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0213 21:56:47.283616   16933 start.go:125] createHost starting for "" (driver="kvm2")
	I0213 21:56:47.286729   16933 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0213 21:56:47.286881   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:56:47.286927   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:56:47.300419   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0213 21:56:47.300880   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:56:47.301420   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:56:47.301437   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:56:47.301801   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:56:47.301951   16933 main.go:141] libmachine: (addons-174699) Calling .GetMachineName
	I0213 21:56:47.302094   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:56:47.302279   16933 start.go:159] libmachine.API.Create for "addons-174699" (driver="kvm2")
	I0213 21:56:47.302309   16933 client.go:168] LocalClient.Create starting
	I0213 21:56:47.302360   16933 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca.pem
	I0213 21:56:47.529120   16933 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/cert.pem
	I0213 21:56:47.701543   16933 main.go:141] libmachine: Running pre-create checks...
	I0213 21:56:47.701573   16933 main.go:141] libmachine: (addons-174699) Calling .PreCreateCheck
	I0213 21:56:47.702080   16933 main.go:141] libmachine: (addons-174699) Calling .GetConfigRaw
	I0213 21:56:47.702491   16933 main.go:141] libmachine: Creating machine...
	I0213 21:56:47.702505   16933 main.go:141] libmachine: (addons-174699) Calling .Create
	I0213 21:56:47.702643   16933 main.go:141] libmachine: (addons-174699) Creating KVM machine...
	I0213 21:56:47.703800   16933 main.go:141] libmachine: (addons-174699) DBG | found existing default KVM network
	I0213 21:56:47.704517   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:47.704370   16955 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0213 21:56:47.710048   16933 main.go:141] libmachine: (addons-174699) DBG | trying to create private KVM network mk-addons-174699 192.168.39.0/24...
	I0213 21:56:47.778609   16933 main.go:141] libmachine: (addons-174699) Setting up store path in /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699 ...
	I0213 21:56:47.778643   16933 main.go:141] libmachine: (addons-174699) Building disk image from file:///home/jenkins/minikube-integration/18171-8975/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 21:56:47.778651   16933 main.go:141] libmachine: (addons-174699) DBG | private KVM network mk-addons-174699 192.168.39.0/24 created
	I0213 21:56:47.778671   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:47.778554   16955 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 21:56:47.778688   16933 main.go:141] libmachine: (addons-174699) Downloading /home/jenkins/minikube-integration/18171-8975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18171-8975/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0213 21:56:47.997487   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:47.997372   16955 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa...
	I0213 21:56:48.089702   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:48.089565   16955 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/addons-174699.rawdisk...
	I0213 21:56:48.089735   16933 main.go:141] libmachine: (addons-174699) DBG | Writing magic tar header
	I0213 21:56:48.089750   16933 main.go:141] libmachine: (addons-174699) DBG | Writing SSH key tar header
	I0213 21:56:48.089812   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:48.089742   16955 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699 ...
	I0213 21:56:48.089906   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699
	I0213 21:56:48.089929   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8975/.minikube/machines
	I0213 21:56:48.089947   16933 main.go:141] libmachine: (addons-174699) Setting executable bit set on /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699 (perms=drwx------)
	I0213 21:56:48.089959   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 21:56:48.089974   16933 main.go:141] libmachine: (addons-174699) Setting executable bit set on /home/jenkins/minikube-integration/18171-8975/.minikube/machines (perms=drwxr-xr-x)
	I0213 21:56:48.089984   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8975
	I0213 21:56:48.090001   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0213 21:56:48.090014   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home/jenkins
	I0213 21:56:48.090030   16933 main.go:141] libmachine: (addons-174699) Setting executable bit set on /home/jenkins/minikube-integration/18171-8975/.minikube (perms=drwxr-xr-x)
	I0213 21:56:48.090054   16933 main.go:141] libmachine: (addons-174699) DBG | Checking permissions on dir: /home
	I0213 21:56:48.090067   16933 main.go:141] libmachine: (addons-174699) DBG | Skipping /home - not owner
	I0213 21:56:48.090074   16933 main.go:141] libmachine: (addons-174699) Setting executable bit set on /home/jenkins/minikube-integration/18171-8975 (perms=drwxrwxr-x)
	I0213 21:56:48.090083   16933 main.go:141] libmachine: (addons-174699) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0213 21:56:48.090089   16933 main.go:141] libmachine: (addons-174699) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0213 21:56:48.090097   16933 main.go:141] libmachine: (addons-174699) Creating domain...
	I0213 21:56:48.091044   16933 main.go:141] libmachine: (addons-174699) define libvirt domain using xml: 
	I0213 21:56:48.091073   16933 main.go:141] libmachine: (addons-174699) <domain type='kvm'>
	I0213 21:56:48.091086   16933 main.go:141] libmachine: (addons-174699)   <name>addons-174699</name>
	I0213 21:56:48.091106   16933 main.go:141] libmachine: (addons-174699)   <memory unit='MiB'>4000</memory>
	I0213 21:56:48.091122   16933 main.go:141] libmachine: (addons-174699)   <vcpu>2</vcpu>
	I0213 21:56:48.091134   16933 main.go:141] libmachine: (addons-174699)   <features>
	I0213 21:56:48.091148   16933 main.go:141] libmachine: (addons-174699)     <acpi/>
	I0213 21:56:48.091161   16933 main.go:141] libmachine: (addons-174699)     <apic/>
	I0213 21:56:48.091175   16933 main.go:141] libmachine: (addons-174699)     <pae/>
	I0213 21:56:48.091191   16933 main.go:141] libmachine: (addons-174699)     
	I0213 21:56:48.091204   16933 main.go:141] libmachine: (addons-174699)   </features>
	I0213 21:56:48.091215   16933 main.go:141] libmachine: (addons-174699)   <cpu mode='host-passthrough'>
	I0213 21:56:48.091301   16933 main.go:141] libmachine: (addons-174699)   
	I0213 21:56:48.091322   16933 main.go:141] libmachine: (addons-174699)   </cpu>
	I0213 21:56:48.091329   16933 main.go:141] libmachine: (addons-174699)   <os>
	I0213 21:56:48.091335   16933 main.go:141] libmachine: (addons-174699)     <type>hvm</type>
	I0213 21:56:48.091346   16933 main.go:141] libmachine: (addons-174699)     <boot dev='cdrom'/>
	I0213 21:56:48.091351   16933 main.go:141] libmachine: (addons-174699)     <boot dev='hd'/>
	I0213 21:56:48.091357   16933 main.go:141] libmachine: (addons-174699)     <bootmenu enable='no'/>
	I0213 21:56:48.091365   16933 main.go:141] libmachine: (addons-174699)   </os>
	I0213 21:56:48.091374   16933 main.go:141] libmachine: (addons-174699)   <devices>
	I0213 21:56:48.091382   16933 main.go:141] libmachine: (addons-174699)     <disk type='file' device='cdrom'>
	I0213 21:56:48.091394   16933 main.go:141] libmachine: (addons-174699)       <source file='/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/boot2docker.iso'/>
	I0213 21:56:48.091405   16933 main.go:141] libmachine: (addons-174699)       <target dev='hdc' bus='scsi'/>
	I0213 21:56:48.091413   16933 main.go:141] libmachine: (addons-174699)       <readonly/>
	I0213 21:56:48.091420   16933 main.go:141] libmachine: (addons-174699)     </disk>
	I0213 21:56:48.091427   16933 main.go:141] libmachine: (addons-174699)     <disk type='file' device='disk'>
	I0213 21:56:48.091436   16933 main.go:141] libmachine: (addons-174699)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0213 21:56:48.091447   16933 main.go:141] libmachine: (addons-174699)       <source file='/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/addons-174699.rawdisk'/>
	I0213 21:56:48.091454   16933 main.go:141] libmachine: (addons-174699)       <target dev='hda' bus='virtio'/>
	I0213 21:56:48.091462   16933 main.go:141] libmachine: (addons-174699)     </disk>
	I0213 21:56:48.091470   16933 main.go:141] libmachine: (addons-174699)     <interface type='network'>
	I0213 21:56:48.091484   16933 main.go:141] libmachine: (addons-174699)       <source network='mk-addons-174699'/>
	I0213 21:56:48.091492   16933 main.go:141] libmachine: (addons-174699)       <model type='virtio'/>
	I0213 21:56:48.091498   16933 main.go:141] libmachine: (addons-174699)     </interface>
	I0213 21:56:48.091506   16933 main.go:141] libmachine: (addons-174699)     <interface type='network'>
	I0213 21:56:48.091512   16933 main.go:141] libmachine: (addons-174699)       <source network='default'/>
	I0213 21:56:48.091520   16933 main.go:141] libmachine: (addons-174699)       <model type='virtio'/>
	I0213 21:56:48.091525   16933 main.go:141] libmachine: (addons-174699)     </interface>
	I0213 21:56:48.091533   16933 main.go:141] libmachine: (addons-174699)     <serial type='pty'>
	I0213 21:56:48.091541   16933 main.go:141] libmachine: (addons-174699)       <target port='0'/>
	I0213 21:56:48.091547   16933 main.go:141] libmachine: (addons-174699)     </serial>
	I0213 21:56:48.091555   16933 main.go:141] libmachine: (addons-174699)     <console type='pty'>
	I0213 21:56:48.091563   16933 main.go:141] libmachine: (addons-174699)       <target type='serial' port='0'/>
	I0213 21:56:48.091571   16933 main.go:141] libmachine: (addons-174699)     </console>
	I0213 21:56:48.091577   16933 main.go:141] libmachine: (addons-174699)     <rng model='virtio'>
	I0213 21:56:48.091586   16933 main.go:141] libmachine: (addons-174699)       <backend model='random'>/dev/random</backend>
	I0213 21:56:48.091593   16933 main.go:141] libmachine: (addons-174699)     </rng>
	I0213 21:56:48.091601   16933 main.go:141] libmachine: (addons-174699)     
	I0213 21:56:48.091608   16933 main.go:141] libmachine: (addons-174699)     
	I0213 21:56:48.091633   16933 main.go:141] libmachine: (addons-174699)   </devices>
	I0213 21:56:48.091652   16933 main.go:141] libmachine: (addons-174699) </domain>
	I0213 21:56:48.091717   16933 main.go:141] libmachine: (addons-174699) 
	I0213 21:56:48.097829   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:04:35:e8 in network default
	I0213 21:56:48.098383   16933 main.go:141] libmachine: (addons-174699) Ensuring networks are active...
	I0213 21:56:48.098403   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:48.099130   16933 main.go:141] libmachine: (addons-174699) Ensuring network default is active
	I0213 21:56:48.099411   16933 main.go:141] libmachine: (addons-174699) Ensuring network mk-addons-174699 is active
	I0213 21:56:48.099838   16933 main.go:141] libmachine: (addons-174699) Getting domain xml...
	I0213 21:56:48.100607   16933 main.go:141] libmachine: (addons-174699) Creating domain...
	I0213 21:56:49.491042   16933 main.go:141] libmachine: (addons-174699) Waiting to get IP...
	I0213 21:56:49.492010   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:49.492630   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:49.492664   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:49.492577   16955 retry.go:31] will retry after 254.819387ms: waiting for machine to come up
	I0213 21:56:49.749095   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:49.749665   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:49.749696   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:49.749606   16955 retry.go:31] will retry after 315.004908ms: waiting for machine to come up
	I0213 21:56:50.066272   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:50.066643   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:50.066671   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:50.066593   16955 retry.go:31] will retry after 398.159736ms: waiting for machine to come up
	I0213 21:56:50.466160   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:50.466552   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:50.466611   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:50.466523   16955 retry.go:31] will retry after 506.172968ms: waiting for machine to come up
	I0213 21:56:50.974123   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:50.974524   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:50.974555   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:50.974473   16955 retry.go:31] will retry after 619.475058ms: waiting for machine to come up
	I0213 21:56:51.595184   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:51.595594   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:51.595621   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:51.595544   16955 retry.go:31] will retry after 633.942272ms: waiting for machine to come up
	I0213 21:56:52.231318   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:52.231698   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:52.231724   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:52.231646   16955 retry.go:31] will retry after 977.776252ms: waiting for machine to come up
	I0213 21:56:53.211417   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:53.211923   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:53.211957   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:53.211860   16955 retry.go:31] will retry after 1.159469564s: waiting for machine to come up
	I0213 21:56:54.373251   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:54.373775   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:54.373807   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:54.373725   16955 retry.go:31] will retry after 1.822508732s: waiting for machine to come up
	I0213 21:56:56.198761   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:56.199165   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:56.199188   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:56.199113   16955 retry.go:31] will retry after 2.084021956s: waiting for machine to come up
	I0213 21:56:58.284723   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:56:58.285317   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:56:58.285341   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:56:58.285269   16955 retry.go:31] will retry after 2.619422367s: waiting for machine to come up
	I0213 21:57:00.905830   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:00.906343   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:57:00.906370   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:57:00.906298   16955 retry.go:31] will retry after 3.373821631s: waiting for machine to come up
	I0213 21:57:04.281468   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:04.281877   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:57:04.281899   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:57:04.281840   16955 retry.go:31] will retry after 3.564289666s: waiting for machine to come up
	I0213 21:57:07.847953   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:07.848400   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find current IP address of domain addons-174699 in network mk-addons-174699
	I0213 21:57:07.848447   16933 main.go:141] libmachine: (addons-174699) DBG | I0213 21:57:07.848312   16955 retry.go:31] will retry after 5.610760698s: waiting for machine to come up
	I0213 21:57:13.460475   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.460943   16933 main.go:141] libmachine: (addons-174699) Found IP for machine: 192.168.39.71
	I0213 21:57:13.460969   16933 main.go:141] libmachine: (addons-174699) Reserving static IP address...
	I0213 21:57:13.460998   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has current primary IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.461332   16933 main.go:141] libmachine: (addons-174699) DBG | unable to find host DHCP lease matching {name: "addons-174699", mac: "52:54:00:73:c0:fb", ip: "192.168.39.71"} in network mk-addons-174699
	I0213 21:57:13.533472   16933 main.go:141] libmachine: (addons-174699) Reserved static IP address: 192.168.39.71
	I0213 21:57:13.533502   16933 main.go:141] libmachine: (addons-174699) Waiting for SSH to be available...
	I0213 21:57:13.533517   16933 main.go:141] libmachine: (addons-174699) DBG | Getting to WaitForSSH function...
	I0213 21:57:13.536344   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.536830   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:13.536865   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.536999   16933 main.go:141] libmachine: (addons-174699) DBG | Using SSH client type: external
	I0213 21:57:13.537031   16933 main.go:141] libmachine: (addons-174699) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa (-rw-------)
	I0213 21:57:13.537075   16933 main.go:141] libmachine: (addons-174699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 21:57:13.537091   16933 main.go:141] libmachine: (addons-174699) DBG | About to run SSH command:
	I0213 21:57:13.537118   16933 main.go:141] libmachine: (addons-174699) DBG | exit 0
	I0213 21:57:13.632094   16933 main.go:141] libmachine: (addons-174699) DBG | SSH cmd err, output: <nil>: 
	I0213 21:57:13.632343   16933 main.go:141] libmachine: (addons-174699) KVM machine creation complete!
	I0213 21:57:13.632680   16933 main.go:141] libmachine: (addons-174699) Calling .GetConfigRaw
	I0213 21:57:13.633228   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:13.633420   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:13.633625   16933 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 21:57:13.633639   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:13.634932   16933 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 21:57:13.634945   16933 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 21:57:13.634952   16933 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 21:57:13.634958   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:13.638625   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.638977   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:13.639003   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.639153   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:13.639322   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:13.639503   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:13.639673   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:13.639873   16933 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:13.640233   16933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 21:57:13.640248   16933 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 21:57:13.755792   16933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 21:57:13.755821   16933 main.go:141] libmachine: Detecting the provisioner...
	I0213 21:57:13.755834   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:13.758719   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.759131   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:13.759167   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.759343   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:13.759570   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:13.759771   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:13.759927   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:13.760072   16933 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:13.760430   16933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 21:57:13.760445   16933 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 21:57:13.877112   16933 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 21:57:13.877203   16933 main.go:141] libmachine: found compatible host: buildroot
	I0213 21:57:13.877221   16933 main.go:141] libmachine: Provisioning with buildroot...
	I0213 21:57:13.877235   16933 main.go:141] libmachine: (addons-174699) Calling .GetMachineName
	I0213 21:57:13.877509   16933 buildroot.go:166] provisioning hostname "addons-174699"
	I0213 21:57:13.877534   16933 main.go:141] libmachine: (addons-174699) Calling .GetMachineName
	I0213 21:57:13.877766   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:13.880632   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.881083   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:13.881120   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:13.881342   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:13.881554   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:13.881715   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:13.881888   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:13.882076   16933 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:13.882451   16933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 21:57:13.882467   16933 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-174699 && echo "addons-174699" | sudo tee /etc/hostname
	I0213 21:57:14.012816   16933 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-174699
	
	I0213 21:57:14.012848   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.016144   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.016492   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.016518   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.016643   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:14.016827   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.017041   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.017290   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:14.017485   16933 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:14.017817   16933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 21:57:14.017836   16933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-174699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-174699/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-174699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 21:57:14.144738   16933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 21:57:14.144766   16933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8975/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8975/.minikube}
	I0213 21:57:14.144797   16933 buildroot.go:174] setting up certificates
	I0213 21:57:14.144808   16933 provision.go:83] configureAuth start
	I0213 21:57:14.144820   16933 main.go:141] libmachine: (addons-174699) Calling .GetMachineName
	I0213 21:57:14.145152   16933 main.go:141] libmachine: (addons-174699) Calling .GetIP
	I0213 21:57:14.147993   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.148464   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.148494   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.148690   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.151370   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.151670   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.151698   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.151839   16933 provision.go:138] copyHostCerts
	I0213 21:57:14.151909   16933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8975/.minikube/key.pem (1675 bytes)
	I0213 21:57:14.152053   16933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8975/.minikube/ca.pem (1082 bytes)
	I0213 21:57:14.152131   16933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8975/.minikube/cert.pem (1123 bytes)
	I0213 21:57:14.152190   16933 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca-key.pem org=jenkins.addons-174699 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube addons-174699]
	I0213 21:57:14.422480   16933 provision.go:172] copyRemoteCerts
	I0213 21:57:14.422541   16933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 21:57:14.422563   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.425294   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.425724   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.425754   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.425955   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:14.426175   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.426372   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:14.426521   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:14.513911   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 21:57:14.535690   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0213 21:57:14.557988   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0213 21:57:14.580182   16933 provision.go:86] duration metric: configureAuth took 435.362512ms
	I0213 21:57:14.580221   16933 buildroot.go:189] setting minikube options for container-runtime
	I0213 21:57:14.580473   16933 config.go:182] Loaded profile config "addons-174699": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 21:57:14.580501   16933 main.go:141] libmachine: Checking connection to Docker...
	I0213 21:57:14.580511   16933 main.go:141] libmachine: (addons-174699) Calling .GetURL
	I0213 21:57:14.581767   16933 main.go:141] libmachine: (addons-174699) DBG | Using libvirt version 6000000
	I0213 21:57:14.583925   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.584288   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.584339   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.584455   16933 main.go:141] libmachine: Docker is up and running!
	I0213 21:57:14.584470   16933 main.go:141] libmachine: Reticulating splines...
	I0213 21:57:14.584477   16933 client.go:171] LocalClient.Create took 27.28215701s
	I0213 21:57:14.584493   16933 start.go:167] duration metric: libmachine.API.Create for "addons-174699" took 27.282215624s
	I0213 21:57:14.584500   16933 start.go:300] post-start starting for "addons-174699" (driver="kvm2")
	I0213 21:57:14.584509   16933 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 21:57:14.584521   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:14.584797   16933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 21:57:14.584825   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.587158   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.587501   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.587526   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.587652   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:14.587800   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.587911   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:14.588010   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:14.674133   16933 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 21:57:14.678197   16933 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 21:57:14.678221   16933 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8975/.minikube/addons for local assets ...
	I0213 21:57:14.678299   16933 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8975/.minikube/files for local assets ...
	I0213 21:57:14.678326   16933 start.go:303] post-start completed in 93.82103ms
	I0213 21:57:14.678357   16933 main.go:141] libmachine: (addons-174699) Calling .GetConfigRaw
	I0213 21:57:14.678855   16933 main.go:141] libmachine: (addons-174699) Calling .GetIP
	I0213 21:57:14.681266   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.681636   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.681673   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.681901   16933 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/config.json ...
	I0213 21:57:14.682112   16933 start.go:128] duration metric: createHost completed in 27.398485723s
	I0213 21:57:14.682140   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.684212   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.684549   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.684576   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.684733   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:14.684932   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.685072   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.685245   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:14.685402   16933 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:14.685839   16933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 21:57:14.685853   16933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 21:57:14.801238   16933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707861434.787821599
	
	I0213 21:57:14.801262   16933 fix.go:206] guest clock: 1707861434.787821599
	I0213 21:57:14.801272   16933 fix.go:219] Guest: 2024-02-13 21:57:14.787821599 +0000 UTC Remote: 2024-02-13 21:57:14.682127609 +0000 UTC m=+27.511863344 (delta=105.69399ms)
	I0213 21:57:14.801294   16933 fix.go:190] guest clock delta is within tolerance: 105.69399ms
	I0213 21:57:14.801301   16933 start.go:83] releasing machines lock for "addons-174699", held for 27.51775168s
	I0213 21:57:14.801342   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:14.801600   16933 main.go:141] libmachine: (addons-174699) Calling .GetIP
	I0213 21:57:14.804532   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.804852   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.804870   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.805137   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:14.805711   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:14.805874   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:14.805974   16933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 21:57:14.806017   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.806112   16933 ssh_runner.go:195] Run: cat /version.json
	I0213 21:57:14.806132   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:14.808650   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.808892   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.809060   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.809088   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.809206   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:14.809228   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:14.809236   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:14.809423   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:14.809498   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.809561   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:14.809648   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:14.809728   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:14.809752   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:14.809919   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:14.893683   16933 ssh_runner.go:195] Run: systemctl --version
	I0213 21:57:14.923127   16933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 21:57:14.928837   16933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 21:57:14.928902   16933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 21:57:14.944627   16933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 21:57:14.944652   16933 start.go:475] detecting cgroup driver to use...
	I0213 21:57:14.944721   16933 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0213 21:57:14.977889   16933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 21:57:14.990498   16933 docker.go:217] disabling cri-docker service (if available) ...
	I0213 21:57:14.990565   16933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 21:57:15.002613   16933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 21:57:15.014986   16933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 21:57:15.116265   16933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 21:57:15.238143   16933 docker.go:233] disabling docker service ...
	I0213 21:57:15.238239   16933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 21:57:15.252183   16933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 21:57:15.263868   16933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 21:57:15.377451   16933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 21:57:15.494796   16933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 21:57:15.506927   16933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 21:57:15.523871   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 21:57:15.532886   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 21:57:15.542000   16933 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 21:57:15.542067   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 21:57:15.551024   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 21:57:15.559759   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 21:57:15.568463   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 21:57:15.577004   16933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 21:57:15.585989   16933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 21:57:15.595011   16933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 21:57:15.602870   16933 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 21:57:15.602930   16933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 21:57:15.615457   16933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 21:57:15.624496   16933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 21:57:15.722411   16933 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 21:57:15.751722   16933 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0213 21:57:15.751822   16933 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0213 21:57:15.757246   16933 retry.go:31] will retry after 1.476559131s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0213 21:57:17.234216   16933 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0213 21:57:17.240154   16933 start.go:543] Will wait 60s for crictl version
	I0213 21:57:17.240236   16933 ssh_runner.go:195] Run: which crictl
	I0213 21:57:17.243819   16933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 21:57:17.279971   16933 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0213 21:57:17.280075   16933 ssh_runner.go:195] Run: containerd --version
	I0213 21:57:17.308184   16933 ssh_runner.go:195] Run: containerd --version
	I0213 21:57:17.336497   16933 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0213 21:57:17.338256   16933 main.go:141] libmachine: (addons-174699) Calling .GetIP
	I0213 21:57:17.341090   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:17.341439   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:17.341467   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:17.341707   16933 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 21:57:17.345613   16933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 21:57:17.357547   16933 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0213 21:57:17.357617   16933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:17.396745   16933 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 21:57:17.396807   16933 ssh_runner.go:195] Run: which lz4
	I0213 21:57:17.400632   16933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 21:57:17.404527   16933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 21:57:17.404563   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0213 21:57:19.203884   16933 containerd.go:548] Took 1.803293 seconds to copy over tarball
	I0213 21:57:19.203962   16933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 21:57:22.319783   16933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115790437s)
	I0213 21:57:22.363899   16933 containerd.go:555] Took 3.159965 seconds to extract the tarball
	I0213 21:57:22.363917   16933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 21:57:22.406352   16933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 21:57:22.514216   16933 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 21:57:22.539829   16933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:22.582362   16933 retry.go:31] will retry after 162.305074ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-13T21:57:22Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0213 21:57:22.745755   16933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:22.785192   16933 containerd.go:612] all images are preloaded for containerd runtime.
	I0213 21:57:22.785215   16933 cache_images.go:84] Images are preloaded, skipping loading
	I0213 21:57:22.785285   16933 ssh_runner.go:195] Run: sudo crictl info
	I0213 21:57:22.822962   16933 cni.go:84] Creating CNI manager for ""
	I0213 21:57:22.822989   16933 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0213 21:57:22.823013   16933 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 21:57:22.823039   16933 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-174699 NodeName:addons-174699 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 21:57:22.823188   16933 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-174699"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 21:57:22.823282   16933 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-174699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-174699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
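The drop-in logged above uses the standard systemd idiom for replacing a unit's command: an empty `ExecStart=` first clears the list-valued setting inherited from the base `kubelet.service`, then the following `ExecStart=` sets the one command actually wanted. A minimal sketch written to a temp dir (unit name and flags are illustrative, not the full minikube drop-in):

```shell
dir=$(mktemp -d)
cat > "$dir/10-kubeadm.conf" <<'EOF'
[Service]
# First clear the ExecStart list inherited from the base kubelet.service...
ExecStart=
# ...then set the single command we actually want to run.
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml
EOF
grep -c '^ExecStart' "$dir/10-kubeadm.conf"
```

Without the empty first line, systemd would refuse to start the unit because `ExecStart` may only be set once for `Type=simple` services.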
	I0213 21:57:22.823370   16933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 21:57:22.833211   16933 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 21:57:22.833298   16933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 21:57:22.842737   16933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0213 21:57:22.859182   16933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 21:57:22.875214   16933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0213 21:57:22.891647   16933 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0213 21:57:22.895663   16933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
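The one-liner above rewrites `/etc/hosts` without sed: filter out any stale line for the hostname, append the fresh mapping, write to a temp file, then `cp` it back. The same pattern, sketched against a scratch file so no root access is needed (paths, IP, and hostname are illustrative):

```shell
# Scratch stand-in for /etc/hosts with a stale mapping present.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.70\tcontrol-plane.minikube.internal\n' > "$hosts"

ip="192.168.39.71"
name="control-plane.minikube.internal"

# Drop the old line for $name, append the new mapping, then replace the
# file in one cp -- the same shape as the logged bash one-liner.
{ grep -v $'\t'"$name"'$' "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep "$name" "$hosts"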
	I0213 21:57:22.907703   16933 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699 for IP: 192.168.39.71
	I0213 21:57:22.907741   16933 certs.go:190] acquiring lock for shared ca certs: {Name:mk6ca343a3ff2d77b3bf73683323fdbb8d4504aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:22.907887   16933 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18171-8975/.minikube/ca.key
	I0213 21:57:23.015901   16933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8975/.minikube/ca.crt ...
	I0213 21:57:23.015932   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/ca.crt: {Name:mk7b868582cda676b3abaf5594b15636590d94c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.016093   16933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8975/.minikube/ca.key ...
	I0213 21:57:23.016103   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/ca.key: {Name:mka5394f96fb53f73269c478c3b37ace584b0dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.016178   16933 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.key
	I0213 21:57:23.203747   16933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.crt ...
	I0213 21:57:23.203781   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.crt: {Name:mk4fe84f3a0068e1413248efbf702d346867c338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.203944   16933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.key ...
	I0213 21:57:23.203955   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.key: {Name:mk78643a465f5c03acc881eae938de69438d573d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.204054   16933 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.key
	I0213 21:57:23.204067   16933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt with IP's: []
	I0213 21:57:23.354739   16933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt ...
	I0213 21:57:23.354772   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: {Name:mkc3d6d44060ba495d9810a7a92777865738a135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.354934   16933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.key ...
	I0213 21:57:23.354945   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.key: {Name:mkdd38396d3b351898d18f46638e869f42468a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.355016   16933 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.key.f4667c0f
	I0213 21:57:23.355032   16933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.crt.f4667c0f with IP's: [192.168.39.71 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 21:57:23.419680   16933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.crt.f4667c0f ...
	I0213 21:57:23.419715   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.crt.f4667c0f: {Name:mk56a4bb1f4fc03202419cef8aa6ba23e5725d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.419869   16933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.key.f4667c0f ...
	I0213 21:57:23.419880   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.key.f4667c0f: {Name:mk9b24aa740c74d3468c22cfa141560f25175fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.419942   16933 certs.go:337] copying /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.crt.f4667c0f -> /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.crt
	I0213 21:57:23.420004   16933 certs.go:341] copying /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.key.f4667c0f -> /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.key
	I0213 21:57:23.420051   16933 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.key
	I0213 21:57:23.420067   16933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.crt with IP's: []
	I0213 21:57:23.632970   16933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.crt ...
	I0213 21:57:23.633001   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.crt: {Name:mk68c81f5dc7a0fcad4dcf67cc55ee352b08c8fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.633150   16933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.key ...
	I0213 21:57:23.633161   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.key: {Name:mkde34b73f3355dff44036dad5f1d23b083a0744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:23.633313   16933 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca-key.pem (1679 bytes)
	I0213 21:57:23.633350   16933 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/home/jenkins/minikube-integration/18171-8975/.minikube/certs/ca.pem (1082 bytes)
	I0213 21:57:23.633375   16933 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/home/jenkins/minikube-integration/18171-8975/.minikube/certs/cert.pem (1123 bytes)
	I0213 21:57:23.633395   16933 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8975/.minikube/certs/home/jenkins/minikube-integration/18171-8975/.minikube/certs/key.pem (1675 bytes)
	I0213 21:57:23.633946   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 21:57:23.657426   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 21:57:23.679007   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 21:57:23.700291   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 21:57:23.721264   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 21:57:23.741932   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 21:57:23.763246   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 21:57:23.785192   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 21:57:23.807302   16933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 21:57:23.829358   16933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 21:57:23.845190   16933 ssh_runner.go:195] Run: openssl version
	I0213 21:57:23.850748   16933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 21:57:23.861057   16933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:23.865483   16933 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:23.865537   16933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:23.871072   16933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
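The `openssl x509 -hash` / `ln -fs` pair above exists because OpenSSL looks up trusted CAs in `/etc/ssl/certs` by subject-hash filenames of the form `<hash>.0` (here `b5213941.0`). A sketch of the same steps in a temp dir with a throwaway self-signed cert, assuming the `openssl` CLI is available:

```shell
dir=$(mktemp -d)
# Throwaway self-signed CA certificate; subject is illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink, mirroring
# `ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0`.
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
```

The `test -L … || ln -fs …` guard in the log just avoids re-creating a symlink that already points at the right place.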
	I0213 21:57:23.881480   16933 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 21:57:23.885330   16933 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 21:57:23.885398   16933 kubeadm.go:404] StartCluster: {Name:addons-174699 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-174699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:57:23.885458   16933 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0213 21:57:23.885507   16933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 21:57:23.924002   16933 cri.go:89] found id: ""
	I0213 21:57:23.924068   16933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 21:57:23.933779   16933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 21:57:23.943174   16933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 21:57:23.952153   16933 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 21:57:23.952192   16933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 21:57:24.008788   16933 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 21:57:24.008859   16933 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 21:57:24.142461   16933 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 21:57:24.142659   16933 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 21:57:24.142780   16933 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 21:57:24.360117   16933 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 21:57:24.362644   16933 out.go:204]   - Generating certificates and keys ...
	I0213 21:57:24.362748   16933 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 21:57:24.362836   16933 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 21:57:24.466877   16933 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 21:57:24.775184   16933 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 21:57:24.973106   16933 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 21:57:25.093145   16933 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 21:57:25.210394   16933 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 21:57:25.210645   16933 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-174699 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I0213 21:57:25.374603   16933 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 21:57:25.374762   16933 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-174699 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I0213 21:57:25.598311   16933 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 21:57:25.823407   16933 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 21:57:26.062158   16933 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 21:57:26.062429   16933 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 21:57:26.285795   16933 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 21:57:26.603616   16933 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 21:57:26.730045   16933 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 21:57:26.885767   16933 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 21:57:26.886412   16933 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 21:57:26.891024   16933 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 21:57:26.893458   16933 out.go:204]   - Booting up control plane ...
	I0213 21:57:26.893563   16933 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 21:57:26.893649   16933 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 21:57:26.894000   16933 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 21:57:26.913611   16933 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 21:57:26.913756   16933 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 21:57:26.913814   16933 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 21:57:27.033564   16933 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 21:57:34.535322   16933 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502684 seconds
	I0213 21:57:34.535493   16933 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 21:57:34.555349   16933 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 21:57:35.089978   16933 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 21:57:35.090174   16933 kubeadm.go:322] [mark-control-plane] Marking the node addons-174699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 21:57:35.605791   16933 kubeadm.go:322] [bootstrap-token] Using token: q771su.7xl4miu6qeoclmis
	I0213 21:57:35.607453   16933 out.go:204]   - Configuring RBAC rules ...
	I0213 21:57:35.607591   16933 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 21:57:35.623165   16933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 21:57:35.633676   16933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 21:57:35.637421   16933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 21:57:35.646373   16933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 21:57:35.651103   16933 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 21:57:35.667372   16933 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 21:57:35.886316   16933 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 21:57:36.029490   16933 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 21:57:36.031006   16933 kubeadm.go:322] 
	I0213 21:57:36.031081   16933 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 21:57:36.031101   16933 kubeadm.go:322] 
	I0213 21:57:36.031186   16933 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 21:57:36.031203   16933 kubeadm.go:322] 
	I0213 21:57:36.031229   16933 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 21:57:36.031350   16933 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 21:57:36.031423   16933 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 21:57:36.031442   16933 kubeadm.go:322] 
	I0213 21:57:36.031520   16933 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 21:57:36.031532   16933 kubeadm.go:322] 
	I0213 21:57:36.031604   16933 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 21:57:36.031620   16933 kubeadm.go:322] 
	I0213 21:57:36.031688   16933 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 21:57:36.031792   16933 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 21:57:36.031879   16933 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 21:57:36.031891   16933 kubeadm.go:322] 
	I0213 21:57:36.031998   16933 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 21:57:36.032115   16933 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 21:57:36.032126   16933 kubeadm.go:322] 
	I0213 21:57:36.032230   16933 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token q771su.7xl4miu6qeoclmis \
	I0213 21:57:36.032316   16933 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4ca6857fc9a6df724eaa4463824f875c579d39986902635e4c4bf2879dc76bfb \
	I0213 21:57:36.032367   16933 kubeadm.go:322] 	--control-plane 
	I0213 21:57:36.032376   16933 kubeadm.go:322] 
	I0213 21:57:36.032488   16933 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 21:57:36.032497   16933 kubeadm.go:322] 
	I0213 21:57:36.032605   16933 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token q771su.7xl4miu6qeoclmis \
	I0213 21:57:36.032748   16933 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4ca6857fc9a6df724eaa4463824f875c579d39986902635e4c4bf2879dc76bfb 
	I0213 21:57:36.033955   16933 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 21:57:36.033978   16933 cni.go:84] Creating CNI manager for ""
	I0213 21:57:36.033985   16933 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0213 21:57:36.036088   16933 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 21:57:36.037583   16933 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 21:57:36.053093   16933 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
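The 457-byte file scp'd to `/etc/cni/net.d/1-k8s.conflist` above is minikube's bridge CNI config. A hedged approximation of its shape, written to a temp dir (field values are illustrative; the real file is templated by minikube's cni package):

```shell
dir=$(mktemp -d)
# Approximate bridge conflist: a bridge plugin with host-local IPAM on the
# pod subnet from the log, chained with portmap for hostPort support.
cat > "$dir/1-k8s.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF
grep -o '"type": "[a-z-]*"' "$dir/1-k8s.conflist"
```

The kubelet (via containerd's CNI integration) picks up whichever conflist sorts first in `/etc/cni/net.d`, which is why the file is named with a `1-` prefix.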
	I0213 21:57:36.085222   16933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 21:57:36.085332   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=addons-174699 minikube.k8s.io/updated_at=2024_02_13T21_57_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.085349   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.123145   16933 ops.go:34] apiserver oom_adj: -16
	I0213 21:57:36.403834   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.904347   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:37.404671   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:37.904518   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:38.404785   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:38.904001   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:39.404020   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:39.904207   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:40.404337   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:40.904858   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:41.404648   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:41.904280   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:42.404231   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:42.904429   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:43.404076   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:43.904623   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:44.404721   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:44.904673   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:45.404733   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:45.904609   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:46.404808   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:46.904836   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:47.404496   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:47.904826   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:48.404179   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:48.904778   16933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:49.170496   16933 kubeadm.go:1088] duration metric: took 13.085247096s to wait for elevateKubeSystemPrivileges.
	I0213 21:57:49.170535   16933 kubeadm.go:406] StartCluster complete in 25.285162424s
	I0213 21:57:49.170556   16933 settings.go:142] acquiring lock: {Name:mk91092fd9e2735668756fe4718a0d771f1e5360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:49.170705   16933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 21:57:49.171132   16933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/kubeconfig: {Name:mk78ba69bd7ab8f34c55fb615c71bc14320f5294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:49.171352   16933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 21:57:49.171432   16933 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0213 21:57:49.171543   16933 addons.go:69] Setting yakd=true in profile "addons-174699"
	I0213 21:57:49.171555   16933 config.go:182] Loaded profile config "addons-174699": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 21:57:49.171566   16933 addons.go:69] Setting cloud-spanner=true in profile "addons-174699"
	I0213 21:57:49.171578   16933 addons.go:234] Setting addon yakd=true in "addons-174699"
	I0213 21:57:49.171595   16933 addons.go:234] Setting addon cloud-spanner=true in "addons-174699"
	I0213 21:57:49.171613   16933 addons.go:69] Setting gcp-auth=true in profile "addons-174699"
	I0213 21:57:49.171634   16933 mustload.go:65] Loading cluster: addons-174699
	I0213 21:57:49.171637   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.171667   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.171667   16933 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-174699"
	I0213 21:57:49.171695   16933 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-174699"
	I0213 21:57:49.171678   16933 addons.go:69] Setting default-storageclass=true in profile "addons-174699"
	I0213 21:57:49.171702   16933 addons.go:69] Setting storage-provisioner=true in profile "addons-174699"
	I0213 21:57:49.171733   16933 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-174699"
	I0213 21:57:49.171738   16933 addons.go:234] Setting addon storage-provisioner=true in "addons-174699"
	I0213 21:57:49.171768   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.171817   16933 config.go:182] Loaded profile config "addons-174699": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 21:57:49.171850   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.171558   16933 addons.go:69] Setting helm-tiller=true in profile "addons-174699"
	I0213 21:57:49.172135   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172150   16933 addons.go:69] Setting ingress-dns=true in profile "addons-174699"
	I0213 21:57:49.172162   16933 addons.go:234] Setting addon ingress-dns=true in "addons-174699"
	I0213 21:57:49.172178   16933 addons.go:69] Setting registry=true in profile "addons-174699"
	I0213 21:57:49.172183   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172184   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172191   16933 addons.go:234] Setting addon registry=true in "addons-174699"
	I0213 21:57:49.172213   16933 addons.go:69] Setting inspektor-gadget=true in profile "addons-174699"
	I0213 21:57:49.172228   16933 addons.go:234] Setting addon inspektor-gadget=true in "addons-174699"
	I0213 21:57:49.172229   16933 addons.go:69] Setting ingress=true in profile "addons-174699"
	I0213 21:57:49.172237   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.172240   16933 addons.go:234] Setting addon ingress=true in "addons-174699"
	I0213 21:57:49.172236   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172251   16933 addons.go:69] Setting volumesnapshots=true in profile "addons-174699"
	I0213 21:57:49.172265   16933 addons.go:234] Setting addon volumesnapshots=true in "addons-174699"
	I0213 21:57:49.172268   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.172289   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172294   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172299   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.172353   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172374   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172400   16933 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-174699"
	I0213 21:57:49.172138   16933 addons.go:234] Setting addon helm-tiller=true in "addons-174699"
	I0213 21:57:49.172447   16933 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-174699"
	I0213 21:57:49.172468   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172493   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172601   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172617   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172661   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172669   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.172687   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172745   16933 addons.go:69] Setting metrics-server=true in profile "addons-174699"
	I0213 21:57:49.172760   16933 addons.go:234] Setting addon metrics-server=true in "addons-174699"
	I0213 21:57:49.172914   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.172974   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172983   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.173006   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.173018   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.173040   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.172217   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.173106   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.173398   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.173423   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.173558   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.173588   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.173779   16933 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-174699"
	I0213 21:57:49.173817   16933 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-174699"
	I0213 21:57:49.173829   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.174196   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.174231   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.174236   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.174301   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.174348   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.189852   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0213 21:57:49.191584   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0213 21:57:49.202538   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.202595   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.203536   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.203659   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.204228   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.204254   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.204423   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.204437   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.204890   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.204955   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.205532   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.205569   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.206145   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.206184   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.232298   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45061
	I0213 21:57:49.232944   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.233512   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.233542   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.233974   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.234563   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.234591   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.237826   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0213 21:57:49.240072   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0213 21:57:49.240390   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.240915   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.240933   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.241023   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.241400   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.241949   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.242001   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.242338   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.242351   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.242666   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0213 21:57:49.242848   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I0213 21:57:49.242854   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.243411   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.243449   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.243682   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.244250   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.244267   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.244635   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.244819   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.244879   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.245155   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0213 21:57:49.245407   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.245421   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.245840   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.245913   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0213 21:57:49.246334   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.246811   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.246832   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.247181   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.247337   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.248448   16933 addons.go:234] Setting addon default-storageclass=true in "addons-174699"
	I0213 21:57:49.248491   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.249018   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.251165   16933 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 21:57:49.249390   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.249514   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.249964   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.250425   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44295
	I0213 21:57:49.252922   16933 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 21:57:49.252934   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.252936   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 21:57:49.253028   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.253383   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
	I0213 21:57:49.253907   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.254001   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.254476   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.254493   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.254812   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.254828   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.255255   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.255822   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.255857   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.256088   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.256688   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.256711   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.256725   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.256940   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.256955   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.257173   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.257368   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.257525   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.257658   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.258352   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.260859   16933 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0213 21:57:49.259129   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.259684   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.259723   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0213 21:57:49.262066   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0213 21:57:49.262445   16933 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0213 21:57:49.262469   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0213 21:57:49.262491   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.262549   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.262955   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45249
	I0213 21:57:49.263517   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.263521   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.263577   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.263618   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.264203   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.264222   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.264360   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.264525   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.264537   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.265357   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.265769   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.267623   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.268026   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.268055   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.268225   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.268388   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.268515   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.268632   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.269436   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.269464   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.270283   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.270878   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.270901   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.271135   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.271486   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.271519   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.272419   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0213 21:57:49.272753   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.273059   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.273208   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.273220   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.273508   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.273673   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.274073   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.274105   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.276586   16933 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-174699"
	I0213 21:57:49.276628   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:49.277029   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.277065   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.277669   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
	I0213 21:57:49.278090   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.278188   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0213 21:57:49.278497   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.278513   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.278846   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.279047   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.280984   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.281070   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0213 21:57:49.283322   16933 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:49.281864   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.281916   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.282518   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0213 21:57:49.286761   16933 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0213 21:57:49.285915   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.285985   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.286005   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.288511   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.288579   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.288647   16933 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:49.290886   16933 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 21:57:49.290904   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0213 21:57:49.290922   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.288866   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.288897   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.288958   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.291019   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.291463   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.291654   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.292093   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.292132   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.293126   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.293205   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0213 21:57:49.294920   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.295013   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.295695   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0213 21:57:49.295968   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.296005   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.296223   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0213 21:57:49.296426   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.296997   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.297002   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.297021   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.297051   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.297072   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.297093   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.298479   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0213 21:57:49.300437   16933 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0213 21:57:49.300454   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0213 21:57:49.300473   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.302319   16933 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0213 21:57:49.297587   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.297618   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.297656   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.297658   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.302610   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0213 21:57:49.303615   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0213 21:57:49.304018   16933 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 21:57:49.304029   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0213 21:57:49.304046   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.304018   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.305122   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.305203   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.305269   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.305912   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
	I0213 21:57:49.306037   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.306090   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.306209   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.306226   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.306691   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.306715   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.307385   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.307433   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.307490   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.307536   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.309839   16933 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0213 21:57:49.308086   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.308136   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.308302   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.308381   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.308540   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.308708   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.308747   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.309090   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.309243   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.313743   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0213 21:57:49.315961   16933 out.go:177]   - Using image docker.io/registry:2.8.3
	I0213 21:57:49.314479   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.314505   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.314537   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.314622   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.315263   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.315283   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0213 21:57:49.315405   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.315444   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.315551   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.316448   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.317699   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.317798   16933 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0213 21:57:49.318280   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.320207   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.320269   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.320294   16933 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0213 21:57:49.321809   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0213 21:57:49.320342   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.321903   16933 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0213 21:57:49.321916   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0213 21:57:49.321933   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.320381   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0213 21:57:49.321959   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.320575   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0213 21:57:49.320614   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.323662   16933 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0213 21:57:49.320768   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.320792   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.320807   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.321290   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.322166   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.322244   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.322518   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.322682   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.325716   16933 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0213 21:57:49.325731   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0213 21:57:49.325746   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.326070   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.326494   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.326530   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.326910   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.326997   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.327030   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.327380   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.327396   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.327450   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.327889   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.328009   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.328023   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.328074   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.328376   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.328761   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.328792   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.329162   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:49.329192   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:49.329536   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.329735   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.329797   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.329914   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.330044   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.330562   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.330614   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.330643   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.331675   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.331684   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.331741   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.333529   16933 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0213 21:57:49.332069   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.332073   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.332389   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.332576   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.336214   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0213 21:57:49.334916   16933 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0213 21:57:49.335058   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.333570   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.336275   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.336435   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.339557   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0213 21:57:49.337898   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.337924   16933 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0213 21:57:49.338165   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.338582   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.342310   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0213 21:57:49.341035   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0213 21:57:49.341142   16933 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 21:57:49.341396   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.343877   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0213 21:57:49.343821   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0213 21:57:49.342381   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.342390   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0213 21:57:49.341481   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.345319   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0213 21:57:49.345429   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.346756   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0213 21:57:49.346354   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.347829   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0213 21:57:49.348285   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0213 21:57:49.348810   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.348975   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.349069   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0213 21:57:49.350379   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.350435   16933 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W0213 21:57:49.351872   16933 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-174699" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0213 21:57:49.351891   16933 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0213 21:57:49.351916   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0213 21:57:49.351910   16933 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0213 21:57:49.351926   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0213 21:57:49.354330   16933 out.go:177] * Verifying Kubernetes components...
	I0213 21:57:49.351956   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.350909   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.350933   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:49.351100   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.351112   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.351624   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.350495   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.350861   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.356380   16933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 21:57:49.356396   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.356382   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.356437   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.356464   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.356596   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.356632   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.356713   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.356759   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.356804   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.357258   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.357370   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:49.357388   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:49.357307   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.357314   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.357554   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.357754   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:49.357984   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.358035   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:49.359609   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.361307   16933 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0213 21:57:49.360109   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.361283   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:49.361529   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.362018   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.362665   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.362688   16933 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 21:57:49.362696   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.362699   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 21:57:49.362717   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.362886   16933 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 21:57:49.362896   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 21:57:49.362907   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.364363   16933 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0213 21:57:49.362952   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.367095   16933 out.go:177]   - Using image docker.io/busybox:stable
	I0213 21:57:49.365904   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.365970   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.366417   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.366459   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.366917   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.368404   16933 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 21:57:49.368421   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0213 21:57:49.368428   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.368436   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:49.368458   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.368488   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.368485   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.369070   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.369089   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.369129   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.369255   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.369410   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.369457   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.369625   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	W0213 21:57:49.370975   16933 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34802->192.168.39.71:22: read: connection reset by peer
	I0213 21:57:49.371003   16933 retry.go:31] will retry after 313.202818ms: ssh: handshake failed: read tcp 192.168.39.1:34802->192.168.39.71:22: read: connection reset by peer
	I0213 21:57:49.371459   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.371745   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:49.371769   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:49.371893   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:49.371985   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:49.372046   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:49.372100   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:49.551629   16933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 21:57:49.568692   16933 node_ready.go:35] waiting up to 6m0s for node "addons-174699" to be "Ready" ...
	I0213 21:57:49.572097   16933 node_ready.go:49] node "addons-174699" has status "Ready":"True"
	I0213 21:57:49.572116   16933 node_ready.go:38] duration metric: took 3.37809ms waiting for node "addons-174699" to be "Ready" ...
	I0213 21:57:49.572125   16933 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 21:57:49.581894   16933 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-45snv" in "kube-system" namespace to be "Ready" ...
	I0213 21:57:49.597899   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 21:57:49.608140   16933 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 21:57:49.608158   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0213 21:57:49.652843   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 21:57:49.658038   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0213 21:57:49.658063   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0213 21:57:49.679774   16933 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0213 21:57:49.679798   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0213 21:57:49.688024   16933 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0213 21:57:49.688047   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0213 21:57:49.704106   16933 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0213 21:57:49.704125   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0213 21:57:49.708176   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0213 21:57:49.751280   16933 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 21:57:49.751300   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 21:57:49.836757   16933 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0213 21:57:49.836781   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0213 21:57:49.861865   16933 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 21:57:49.861885   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0213 21:57:49.863147   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0213 21:57:49.863164   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0213 21:57:49.863467   16933 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0213 21:57:49.863482   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0213 21:57:49.870732   16933 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0213 21:57:49.870753   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0213 21:57:49.877641   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 21:57:49.886702   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 21:57:49.916789   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 21:57:49.973913   16933 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0213 21:57:49.973933   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0213 21:57:49.998749   16933 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0213 21:57:49.998776   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0213 21:57:50.001313   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0213 21:57:50.001339   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0213 21:57:50.023201   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 21:57:50.050524   16933 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0213 21:57:50.050547   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0213 21:57:50.065943   16933 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 21:57:50.065963   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 21:57:50.204180   16933 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0213 21:57:50.204209   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0213 21:57:50.365328   16933 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0213 21:57:50.365362   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0213 21:57:50.383515   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 21:57:50.467882   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0213 21:57:50.585905   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0213 21:57:50.585929   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0213 21:57:50.589866   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 21:57:50.661909   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0213 21:57:50.661937   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0213 21:57:50.690919   16933 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0213 21:57:50.690949   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0213 21:57:50.825595   16933 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0213 21:57:50.825620   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0213 21:57:51.028529   16933 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:51.028557   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0213 21:57:51.170393   16933 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0213 21:57:51.170417   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0213 21:57:51.181975   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0213 21:57:51.207935   16933 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0213 21:57:51.207957   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0213 21:57:51.380391   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:51.608975   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:57:51.723564   16933 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0213 21:57:51.723594   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0213 21:57:52.023679   16933 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0213 21:57:52.023708   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0213 21:57:52.327434   16933 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0213 21:57:52.327460   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0213 21:57:52.444158   16933 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0213 21:57:52.444181   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0213 21:57:52.808276   16933 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 21:57:52.808302   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0213 21:57:52.834683   16933 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0213 21:57:52.834715   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0213 21:57:53.097918   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 21:57:53.115979   16933 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0213 21:57:53.116003   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0213 21:57:53.297983   16933 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 21:57:53.298009   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0213 21:57:53.541543   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 21:57:53.917794   16933 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.366094466s)
	I0213 21:57:53.917830   16933 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 21:57:54.088979   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:57:54.694188   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.096246578s)
	I0213 21:57:54.694245   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:54.694259   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:57:54.694641   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:54.694662   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:54.694672   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:54.694682   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:57:54.694934   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:54.694952   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:54.694976   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:57:55.951796   16933 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0213 21:57:55.951838   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:55.954785   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:55.955276   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:55.955307   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:55.955502   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:55.955694   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:55.955875   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:55.956049   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:56.132359   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:57:57.039731   16933 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0213 21:57:57.411429   16933 addons.go:234] Setting addon gcp-auth=true in "addons-174699"
	I0213 21:57:57.411481   16933 host.go:66] Checking if "addons-174699" exists ...
	I0213 21:57:57.411815   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:57.411842   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:57.427576   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0213 21:57:57.427989   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:57.428493   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:57.428513   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:57.428833   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:57.429374   16933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 21:57:57.429403   16933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:57.444951   16933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I0213 21:57:57.445376   16933 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:57.445941   16933 main.go:141] libmachine: Using API Version  1
	I0213 21:57:57.445964   16933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:57.446337   16933 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:57.446539   16933 main.go:141] libmachine: (addons-174699) Calling .GetState
	I0213 21:57:57.448456   16933 main.go:141] libmachine: (addons-174699) Calling .DriverName
	I0213 21:57:57.448719   16933 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0213 21:57:57.448741   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHHostname
	I0213 21:57:57.451354   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:57.451782   16933 main.go:141] libmachine: (addons-174699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:fb", ip: ""} in network mk-addons-174699: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:03 +0000 UTC Type:0 Mac:52:54:00:73:c0:fb Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-174699 Clientid:01:52:54:00:73:c0:fb}
	I0213 21:57:57.451810   16933 main.go:141] libmachine: (addons-174699) DBG | domain addons-174699 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:c0:fb in network mk-addons-174699
	I0213 21:57:57.451952   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHPort
	I0213 21:57:57.452154   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHKeyPath
	I0213 21:57:57.452301   16933 main.go:141] libmachine: (addons-174699) Calling .GetSSHUsername
	I0213 21:57:57.452472   16933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/addons-174699/id_rsa Username:docker}
	I0213 21:57:58.638200   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:01.110304   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:01.285145   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.632268381s)
	I0213 21:58:01.285235   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285251   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285232   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.577027562s)
	I0213 21:58:01.285281   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.407616343s)
	I0213 21:58:01.285323   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285285   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285338   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285338   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.398617571s)
	I0213 21:58:01.285356   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285366   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285367   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285461   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.262229395s)
	I0213 21:58:01.285484   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285488   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.901942241s)
	I0213 21:58:01.285496   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285508   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285570   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285557   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.817646877s)
	I0213 21:58:01.285619   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.695723486s)
	I0213 21:58:01.285642   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285655   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285645   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285709   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285716   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.10370823s)
	I0213 21:58:01.285736   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285764   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.285767   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.368602473s)
	I0213 21:58:01.285949   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.905529989s)
	I0213 21:58:01.285976   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.285990   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	W0213 21:58:01.285992   16933 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 21:58:01.286044   16933 retry.go:31] will retry after 146.149371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 21:58:01.286098   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.188143861s)
	I0213 21:58:01.286149   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.286161   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.287537   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.287546   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.287575   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.287594   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.287594   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.287604   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.287613   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.287621   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.287631   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.287641   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.287649   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.287876   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.287888   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.287897   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.287905   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.287916   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.287959   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.287981   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.287988   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.287997   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.288004   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.288044   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288063   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288070   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.288079   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.288088   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.288088   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288116   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288125   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.290235   16933 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-174699 service yakd-dashboard -n yakd-dashboard
	
	I0213 21:58:01.288221   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288281   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288305   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288312   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288338   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288357   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288354   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288375   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288390   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288408   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288416   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288422   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.288442   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288442   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.287576   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.288462   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.289448   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.289451   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.291757   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291785   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291803   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291821   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.291836   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291837   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291791   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291845   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.291851   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.291857   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.291861   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.291881   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.291884   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291773   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291896   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.291905   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.291906   16933 addons.go:470] Verifying addon registry=true in "addons-174699"
	I0213 21:58:01.291844   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291905   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.291932   16933 addons.go:470] Verifying addon metrics-server=true in "addons-174699"
	I0213 21:58:01.291941   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.291772   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.291956   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.293749   16933 out.go:177] * Verifying registry addon...
	I0213 21:58:01.291964   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.292157   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.295256   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.292241   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.292266   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.295355   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.292273   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.292292   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.295423   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.292333   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.295442   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.292349   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.292398   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.292402   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.295504   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.292427   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.295516   16933 addons.go:470] Verifying addon ingress=true in "addons-174699"
	I0213 21:58:01.293947   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.293970   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.296979   16933 out.go:177] * Verifying ingress addon...
	I0213 21:58:01.295566   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.295923   16933 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0213 21:58:01.299046   16933 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0213 21:58:01.316708   16933 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0213 21:58:01.316726   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:01.323254   16933 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0213 21:58:01.323325   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:01.327106   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.327122   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.327387   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.327402   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.327416   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	W0213 21:58:01.327489   16933 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0213 21:58:01.344334   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.344359   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:01.344626   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.344647   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.344668   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:01.433024   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:58:01.804930   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:01.805431   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.317611   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.317997   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:02.810353   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.810618   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.305999   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.306749   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:03.624153   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:03.796022   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.254431905s)
	I0213 21:58:03.796057   16933 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.347315829s)
	I0213 21:58:03.797720   16933 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:58:03.796062   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:03.799431   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:03.800965   16933 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0213 21:58:03.799688   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:03.799720   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:03.802625   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:03.802634   16933 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0213 21:58:03.802648   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0213 21:58:03.802653   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:03.802664   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:03.802931   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:03.802931   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:03.802961   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:03.802970   16933 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-174699"
	I0213 21:58:03.804584   16933 out.go:177] * Verifying csi-hostpath-driver addon...
	I0213 21:58:03.807305   16933 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0213 21:58:03.816505   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:03.816944   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.830454   16933 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 21:58:03.830478   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:03.938789   16933 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0213 21:58:03.938809   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0213 21:58:04.091983   16933 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 21:58:04.092002   16933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0213 21:58:04.149720   16933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 21:58:04.308827   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:04.310711   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:04.313537   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:04.806483   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:04.807021   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:04.816819   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:04.943222   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.510151327s)
	I0213 21:58:04.943286   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:04.943300   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:04.943606   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:04.943626   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:04.943638   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:04.943651   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:04.943687   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:04.943909   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:04.943927   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:05.305049   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:05.305574   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:05.313311   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:05.804838   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:05.804898   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:05.813691   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:06.090707   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:06.189685   16933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.039926161s)
	I0213 21:58:06.189750   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:06.189763   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:06.190132   16933 main.go:141] libmachine: (addons-174699) DBG | Closing plugin on server side
	I0213 21:58:06.190245   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:06.190264   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:06.190280   16933 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:06.190293   16933 main.go:141] libmachine: (addons-174699) Calling .Close
	I0213 21:58:06.190575   16933 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:06.190629   16933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:06.193429   16933 addons.go:470] Verifying addon gcp-auth=true in "addons-174699"
	I0213 21:58:06.195099   16933 out.go:177] * Verifying gcp-auth addon...
	I0213 21:58:06.197462   16933 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0213 21:58:06.201469   16933 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0213 21:58:06.201492   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:06.305784   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:06.305994   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:06.313735   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:06.701994   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:06.804033   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:06.808515   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:06.813127   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:07.202240   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:07.309497   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:07.309689   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:07.315819   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:07.702919   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:07.804799   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:07.805933   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:07.814098   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:08.201966   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:08.306079   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:08.306625   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:08.313725   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:08.590412   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:08.702550   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:08.806933   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:08.807334   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:08.814044   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:09.204120   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:09.306179   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:09.308531   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:09.314288   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:09.702230   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:09.805679   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:09.810243   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:09.813355   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:10.302444   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:10.308391   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:10.313431   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:10.316483   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:10.702216   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:10.803826   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:10.805786   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:10.817475   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:11.089538   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:11.201367   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:11.307182   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:11.307718   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:11.312391   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:11.701847   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:11.804959   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:11.809361   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:11.816138   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:12.208844   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:12.320925   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:12.323448   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:12.338132   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:12.701418   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:12.805858   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:12.807544   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:12.812471   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:13.201761   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.305273   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.307757   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:13.312418   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:13.589545   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:13.701912   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.805520   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.806381   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:13.812065   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:14.202602   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:14.307901   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:14.313436   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:14.314051   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:14.784088   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:14.808512   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:14.810069   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:14.819301   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.201932   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:15.304736   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:15.305673   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:15.314455   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.702159   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:15.805335   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:15.807191   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:15.812259   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:16.090822   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:16.201790   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.304272   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:16.307266   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:16.312326   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:16.701859   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.803868   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:16.806631   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:16.812915   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.202099   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:17.308337   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:17.309616   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:17.311877   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.702518   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:17.819609   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:17.820441   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.822372   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.201748   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:18.303197   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.304778   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:18.314037   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:18.589767   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:18.701664   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:18.806023   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.809973   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:18.812561   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:19.202385   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:19.305803   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:19.306140   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:19.312974   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:19.709192   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:19.805164   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:19.805740   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:19.813174   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:20.201878   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:20.305465   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:20.306131   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:20.323764   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:20.591484   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:20.702465   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:20.804442   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:20.804635   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:20.815656   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:21.202394   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:21.323251   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:21.339541   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:21.355238   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:21.701877   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:21.804082   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:21.806483   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:21.812274   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:22.204151   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:22.312386   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:22.313724   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:22.316793   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:22.702403   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:22.804468   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:22.805400   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:22.812995   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:23.089764   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:23.201772   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:23.321289   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:23.321357   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:23.324871   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:23.702178   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:23.806028   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:23.809867   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:23.815726   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:24.201847   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:24.307988   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:24.311473   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:24.315789   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:24.705625   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:24.804397   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:24.809016   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:24.814179   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:25.090746   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:25.206320   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:25.304351   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:25.305063   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:25.313190   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:25.702604   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:25.806604   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:25.807190   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:25.812314   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:26.201383   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:26.305591   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:26.307306   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:26.324476   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:26.701790   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:26.808973   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:26.809929   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:26.813936   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:27.413178   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:27.413834   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:27.416906   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:27.419517   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:27.419998   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:27.701831   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:27.803719   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:27.805781   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:27.813528   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:28.202581   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:28.305575   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:28.305772   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:28.312687   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:28.703271   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:28.803953   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:28.804419   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:28.812874   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.201637   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.304089   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.304576   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.325913   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.593131   16933 pod_ready.go:102] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:29.701900   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.806046   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.806172   16933 kapi.go:107] duration metric: took 28.51024661s to wait for kubernetes.io/minikube-addons=registry ...
	I0213 21:58:29.812637   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.202714   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:30.304455   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:30.314151   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.589187   16933 pod_ready.go:92] pod "coredns-5dd5756b68-45snv" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.589212   16933 pod_ready.go:81] duration metric: took 41.007296761s waiting for pod "coredns-5dd5756b68-45snv" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.589221   16933 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrfrv" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.594147   16933 pod_ready.go:92] pod "coredns-5dd5756b68-nrfrv" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.594166   16933 pod_ready.go:81] duration metric: took 4.938471ms waiting for pod "coredns-5dd5756b68-nrfrv" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.594174   16933 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.600633   16933 pod_ready.go:92] pod "etcd-addons-174699" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.600661   16933 pod_ready.go:81] duration metric: took 6.477689ms waiting for pod "etcd-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.600674   16933 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.605675   16933 pod_ready.go:92] pod "kube-apiserver-addons-174699" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.605695   16933 pod_ready.go:81] duration metric: took 5.013513ms waiting for pod "kube-apiserver-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.605703   16933 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.610495   16933 pod_ready.go:92] pod "kube-controller-manager-addons-174699" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.610515   16933 pod_ready.go:81] duration metric: took 4.806074ms waiting for pod "kube-controller-manager-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.610523   16933 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrz4z" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.702235   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:30.804690   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:30.818146   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.986672   16933 pod_ready.go:92] pod "kube-proxy-nrz4z" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.986693   16933 pod_ready.go:81] duration metric: took 376.163682ms waiting for pod "kube-proxy-nrz4z" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.986702   16933 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:31.201997   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:31.305905   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:31.315397   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:31.414161   16933 pod_ready.go:92] pod "kube-scheduler-addons-174699" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:31.414185   16933 pod_ready.go:81] duration metric: took 427.477356ms waiting for pod "kube-scheduler-addons-174699" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:31.414194   16933 pod_ready.go:38] duration metric: took 41.842059618s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 21:58:31.414208   16933 api_server.go:52] waiting for apiserver process to appear ...
	I0213 21:58:31.414254   16933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 21:58:31.476897   16933 api_server.go:72] duration metric: took 42.124960095s to wait for apiserver process to appear ...
	I0213 21:58:31.476921   16933 api_server.go:88] waiting for apiserver healthz status ...
	I0213 21:58:31.476937   16933 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0213 21:58:31.485309   16933 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0213 21:58:31.488008   16933 api_server.go:141] control plane version: v1.28.4
	I0213 21:58:31.488040   16933 api_server.go:131] duration metric: took 11.113523ms to wait for apiserver health ...
	I0213 21:58:31.488053   16933 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 21:58:31.597536   16933 system_pods.go:59] 19 kube-system pods found
	I0213 21:58:31.597564   16933 system_pods.go:61] "coredns-5dd5756b68-45snv" [ef7e94c2-4f10-4b4c-b67a-35c0de8cbc73] Running
	I0213 21:58:31.597570   16933 system_pods.go:61] "coredns-5dd5756b68-nrfrv" [cf390705-6389-45af-bd3d-934c48e226a5] Running
	I0213 21:58:31.597577   16933 system_pods.go:61] "csi-hostpath-attacher-0" [65355396-b021-432f-b5d1-836e9d18ea43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0213 21:58:31.597583   16933 system_pods.go:61] "csi-hostpath-resizer-0" [09feeb24-3ef4-4f24-87e7-b6aeef87d649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0213 21:58:31.597603   16933 system_pods.go:61] "csi-hostpathplugin-llqll" [593bff97-f37e-48b1-9d04-f6d552b68ace] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0213 21:58:31.597609   16933 system_pods.go:61] "etcd-addons-174699" [5261ca82-5930-48f8-8b17-e81b52be791b] Running
	I0213 21:58:31.597614   16933 system_pods.go:61] "kube-apiserver-addons-174699" [efb6398c-7102-4bf3-a382-9039b78c5ac8] Running
	I0213 21:58:31.597618   16933 system_pods.go:61] "kube-controller-manager-addons-174699" [a281a0d3-aad9-485e-a6a7-587f9e7d89ef] Running
	I0213 21:58:31.597623   16933 system_pods.go:61] "kube-ingress-dns-minikube" [af52cf65-5f99-431f-9b4d-9180686db3e0] Running
	I0213 21:58:31.597627   16933 system_pods.go:61] "kube-proxy-nrz4z" [e91c8e5c-5c73-4c3d-8c63-5790160ff221] Running
	I0213 21:58:31.597630   16933 system_pods.go:61] "kube-scheduler-addons-174699" [2a5ca6d2-d718-4499-b9f1-d2ed868866a7] Running
	I0213 21:58:31.597634   16933 system_pods.go:61] "metrics-server-69cf46c98-vbmfc" [0b8dda7d-7308-43e2-aad8-dc0aa55d063e] Running
	I0213 21:58:31.597639   16933 system_pods.go:61] "nvidia-device-plugin-daemonset-8dzqr" [792f8293-6c14-461d-8057-a6a3dd4a96a9] Running
	I0213 21:58:31.597643   16933 system_pods.go:61] "registry-nq98k" [8572b586-44d6-44d9-8930-69d52e78c935] Running
	I0213 21:58:31.597647   16933 system_pods.go:61] "registry-proxy-pgkpv" [51d7fe9d-91e2-49a6-adeb-4a9f32c4fdb1] Running
	I0213 21:58:31.597654   16933 system_pods.go:61] "snapshot-controller-58dbcc7b99-b2gfg" [5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:31.597662   16933 system_pods.go:61] "snapshot-controller-58dbcc7b99-r2blh" [795b7e91-5569-43d9-a190-834862200040] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:31.597668   16933 system_pods.go:61] "storage-provisioner" [d332938b-8b65-4d3d-a6a2-2ee6d17d577b] Running
	I0213 21:58:31.597674   16933 system_pods.go:61] "tiller-deploy-7b677967b9-xmbn9" [8b859860-6572-453a-91cc-40aa67cb3030] Running
	I0213 21:58:31.597679   16933 system_pods.go:74] duration metric: took 109.622211ms to wait for pod list to return data ...
	I0213 21:58:31.597689   16933 default_sa.go:34] waiting for default service account to be created ...
	I0213 21:58:31.702998   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:31.785566   16933 default_sa.go:45] found service account: "default"
	I0213 21:58:31.785597   16933 default_sa.go:55] duration metric: took 187.900699ms for default service account to be created ...
	I0213 21:58:31.785609   16933 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 21:58:31.804196   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:31.814419   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:31.998906   16933 system_pods.go:86] 19 kube-system pods found
	I0213 21:58:31.998933   16933 system_pods.go:89] "coredns-5dd5756b68-45snv" [ef7e94c2-4f10-4b4c-b67a-35c0de8cbc73] Running
	I0213 21:58:31.998939   16933 system_pods.go:89] "coredns-5dd5756b68-nrfrv" [cf390705-6389-45af-bd3d-934c48e226a5] Running
	I0213 21:58:31.998946   16933 system_pods.go:89] "csi-hostpath-attacher-0" [65355396-b021-432f-b5d1-836e9d18ea43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0213 21:58:31.998952   16933 system_pods.go:89] "csi-hostpath-resizer-0" [09feeb24-3ef4-4f24-87e7-b6aeef87d649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0213 21:58:31.998959   16933 system_pods.go:89] "csi-hostpathplugin-llqll" [593bff97-f37e-48b1-9d04-f6d552b68ace] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0213 21:58:31.998964   16933 system_pods.go:89] "etcd-addons-174699" [5261ca82-5930-48f8-8b17-e81b52be791b] Running
	I0213 21:58:31.998970   16933 system_pods.go:89] "kube-apiserver-addons-174699" [efb6398c-7102-4bf3-a382-9039b78c5ac8] Running
	I0213 21:58:31.998974   16933 system_pods.go:89] "kube-controller-manager-addons-174699" [a281a0d3-aad9-485e-a6a7-587f9e7d89ef] Running
	I0213 21:58:31.998983   16933 system_pods.go:89] "kube-ingress-dns-minikube" [af52cf65-5f99-431f-9b4d-9180686db3e0] Running
	I0213 21:58:31.998986   16933 system_pods.go:89] "kube-proxy-nrz4z" [e91c8e5c-5c73-4c3d-8c63-5790160ff221] Running
	I0213 21:58:31.998990   16933 system_pods.go:89] "kube-scheduler-addons-174699" [2a5ca6d2-d718-4499-b9f1-d2ed868866a7] Running
	I0213 21:58:31.998995   16933 system_pods.go:89] "metrics-server-69cf46c98-vbmfc" [0b8dda7d-7308-43e2-aad8-dc0aa55d063e] Running
	I0213 21:58:31.998999   16933 system_pods.go:89] "nvidia-device-plugin-daemonset-8dzqr" [792f8293-6c14-461d-8057-a6a3dd4a96a9] Running
	I0213 21:58:31.999007   16933 system_pods.go:89] "registry-nq98k" [8572b586-44d6-44d9-8930-69d52e78c935] Running
	I0213 21:58:31.999010   16933 system_pods.go:89] "registry-proxy-pgkpv" [51d7fe9d-91e2-49a6-adeb-4a9f32c4fdb1] Running
	I0213 21:58:31.999019   16933 system_pods.go:89] "snapshot-controller-58dbcc7b99-b2gfg" [5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:31.999033   16933 system_pods.go:89] "snapshot-controller-58dbcc7b99-r2blh" [795b7e91-5569-43d9-a190-834862200040] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:31.999038   16933 system_pods.go:89] "storage-provisioner" [d332938b-8b65-4d3d-a6a2-2ee6d17d577b] Running
	I0213 21:58:31.999042   16933 system_pods.go:89] "tiller-deploy-7b677967b9-xmbn9" [8b859860-6572-453a-91cc-40aa67cb3030] Running
	I0213 21:58:31.999052   16933 system_pods.go:126] duration metric: took 213.437166ms to wait for k8s-apps to be running ...
	I0213 21:58:31.999064   16933 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 21:58:31.999115   16933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 21:58:32.015753   16933 system_svc.go:56] duration metric: took 16.676926ms WaitForService to wait for kubelet.
	I0213 21:58:32.015784   16933 kubeadm.go:581] duration metric: took 42.663849977s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 21:58:32.015810   16933 node_conditions.go:102] verifying NodePressure condition ...
	I0213 21:58:32.187220   16933 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 21:58:32.187269   16933 node_conditions.go:123] node cpu capacity is 2
	I0213 21:58:32.187285   16933 node_conditions.go:105] duration metric: took 171.466898ms to run NodePressure ...
	I0213 21:58:32.187299   16933 start.go:228] waiting for startup goroutines ...
	I0213 21:58:32.202282   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:32.304483   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:32.313372   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:32.702199   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.197049   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.197491   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:33.220781   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.305400   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.315525   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:33.701515   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.803641   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.813087   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:34.201776   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:34.303327   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:34.313897   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:34.701663   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:34.803851   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:34.812812   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:35.202949   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:35.307017   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:35.313456   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:35.702060   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:35.804701   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:35.813169   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:36.202312   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:36.305551   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:36.314384   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:36.704632   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:36.804132   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:36.812652   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:37.203859   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:37.305748   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:37.314414   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:37.702479   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:37.803928   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:37.815141   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.202027   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:38.304383   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:38.312769   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.701819   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:38.804678   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:38.814066   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:39.203195   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:39.305675   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:39.313887   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:39.701753   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:39.804023   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:39.813865   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:40.201905   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:40.326035   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:40.326155   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:40.701955   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:40.803864   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:40.816183   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:41.208047   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:41.304714   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:41.314181   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:41.739918   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:41.804662   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:41.817541   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:42.201600   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:42.305513   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:42.313006   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:42.701977   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:42.804796   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:42.813881   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:43.202956   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:43.305963   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:43.314275   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:43.702082   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:43.804948   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:43.813855   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:44.202104   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:44.304766   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:44.315300   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:44.702181   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:44.806939   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:44.812552   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:45.201971   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:45.304277   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:45.313805   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:45.980070   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:45.980897   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:45.981065   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:46.202706   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:46.309887   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:46.315185   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:46.701545   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:46.804050   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:46.820692   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:47.203002   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:47.305154   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:47.315466   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:47.702285   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:47.804136   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:47.813702   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:48.202098   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:48.304166   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:48.313662   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:48.702789   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:48.809710   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:48.814578   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:49.201757   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:49.304871   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:49.315119   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:49.702049   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:49.804852   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:49.813423   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:50.201639   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:50.305538   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:50.313203   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:50.702580   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:50.837230   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:50.838725   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:51.202299   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:51.304955   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:51.317711   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:51.701721   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:51.805493   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:51.812731   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:52.202272   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:52.306014   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:52.313230   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:52.701990   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:52.806171   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:52.812878   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:53.202215   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:53.305968   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:53.313530   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:53.702465   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:53.803771   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:53.812849   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:54.202419   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:54.304531   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:54.313704   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:54.704289   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:54.803893   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:54.819863   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:55.202393   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:55.305178   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:55.313876   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:55.702383   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:55.804603   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:55.813123   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:56.201398   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:56.306769   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:56.318467   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:56.702285   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:56.803604   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:56.813834   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:57.201866   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:57.304887   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:57.315402   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:57.702053   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:57.804524   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:57.813453   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:58.202157   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:58.307337   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:58.312403   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:58.702363   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:58.811424   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:58.815406   16933 kapi.go:107] duration metric: took 55.008098904s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0213 21:58:59.202201   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:59.304751   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:59.701950   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:59.804331   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:00.203246   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:00.305147   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:00.701948   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:00.804739   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:01.202748   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:01.304836   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:01.702706   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:01.804273   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:02.202053   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:02.306869   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:02.702418   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:02.804043   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:03.202084   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:03.305696   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:03.702102   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:03.804488   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:04.202297   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:04.306259   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:04.703689   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:04.803783   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:05.201104   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:05.307646   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:05.701993   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:05.807378   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:06.203616   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:06.304398   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:06.702802   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:06.804846   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:07.202228   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:07.303393   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:07.701734   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:07.804907   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:08.202487   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:08.304444   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:08.702244   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:08.803617   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:09.201777   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:09.303694   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:09.701624   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:09.804987   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:10.203449   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:10.316865   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:10.701831   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:10.804388   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:11.201224   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:11.304555   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:11.702493   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:11.803956   16933 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:12.205040   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:12.304263   16933 kapi.go:107] duration metric: took 1m11.005214498s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0213 21:59:12.753428   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:13.206922   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:13.701880   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:14.202114   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:14.701830   16933 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:15.201983   16933 kapi.go:107] duration metric: took 1m9.004518091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0213 21:59:15.204120   16933 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-174699 cluster.
	I0213 21:59:15.205760   16933 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0213 21:59:15.207337   16933 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0213 21:59:15.208947   16933 out.go:177] * Enabled addons: storage-provisioner, yakd, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, nvidia-device-plugin, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0213 21:59:15.210490   16933 addons.go:505] enable addons completed in 1m26.03905784s: enabled=[storage-provisioner yakd ingress-dns cloud-spanner metrics-server inspektor-gadget nvidia-device-plugin helm-tiller storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0213 21:59:15.210522   16933 start.go:233] waiting for cluster config update ...
	I0213 21:59:15.210537   16933 start.go:242] writing updated cluster config ...
	I0213 21:59:15.210765   16933 ssh_runner.go:195] Run: rm -f paused
	I0213 21:59:15.261207   16933 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 21:59:15.263380   16933 out.go:177] * Done! kubectl is now configured to use "addons-174699" cluster and "default" namespace by default
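The gcp-auth hint in the output above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest — the label key comes from the log itself; the pod name is hypothetical and the `"true"` value follows the minikube gcp-auth addon's documented convention:

```yaml
# Hypothetical pod illustrating the opt-out label mentioned in the
# minikube output above. Pods carrying this label are skipped by the
# gcp-auth mutating webhook, so no credential secret is mounted.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds            # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"  # key from the log; value per addon docs
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/hello-app:1.0   # image already pulled in this run
```

Applied with `kubectl apply -f`, a pod like this should come up without the GCP credential volume that the addon otherwise injects into every pod in the cluster.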
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	287d21a339e51       dd1b12fcb6097       6 seconds ago        Running             hello-world-app              0                   bf026898a106d       hello-world-app-5d77478584-4ptmv
	a6f789ae7095a       2b70e4aaac6b5       16 seconds ago       Running             nginx                        0                   988ed85d13bd9       nginx
	c0ef8606089c8       3cb09943f099d       51 seconds ago       Running             headlamp                     0                   3da4002ccf6c5       headlamp-7ddfbb94ff-rkfg4
	fcabe18a4f8fb       6d2a98b274382       About a minute ago   Running             gcp-auth                     0                   bbdb7f66a36fe       gcp-auth-d4c87556c-4zpgd
	bb0c3038d1643       1ebff0f9671bc       About a minute ago   Exited              patch                        1                   85f4d0898611e       ingress-nginx-admission-patch-h5x2j
	1e652db6dc80c       1ebff0f9671bc       About a minute ago   Exited              create                       0                   7085c4e9b6d78       ingress-nginx-admission-create-zbj8t
	d916c9a851de2       aa61ee9c70bc4       About a minute ago   Exited              volume-snapshot-controller   0                   9bc43d5679c74       snapshot-controller-58dbcc7b99-r2blh
	ad27d0badf328       31de47c733c91       About a minute ago   Running             yakd                         0                   8241eac0d1598       yakd-dashboard-9947fc6bf-6mlsl
	52e3d91b26c60       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns         0                   0de4213b86e6d       kube-ingress-dns-minikube
	457bcaa2ccc58       6e38f40d628db       2 minutes ago        Running             storage-provisioner          0                   892e38b5c8d3e       storage-provisioner
	9a7880a52abea       ead0a4a53df89       2 minutes ago        Running             coredns                      0                   135dfc0e29722       coredns-5dd5756b68-nrfrv
	d374e9bc77c76       83f6cc407eed8       2 minutes ago        Running             kube-proxy                   0                   4a819f3b49661       kube-proxy-nrz4z
	fd9c936bef0e2       ead0a4a53df89       2 minutes ago        Running             coredns                      0                   309305a74b648       coredns-5dd5756b68-45snv
	04c79b3421a5b       73deb9a3f7025       2 minutes ago        Running             etcd                         0                   5c0e2a042cce1       etcd-addons-174699
	54988d297bda6       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler               0                   2d51171bc81e8       kube-scheduler-addons-174699
	bea24f5d7b9e0       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager      0                   77b9c2d764862       kube-controller-manager-addons-174699
	8b946fd95b7dc       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver               0                   3b7d26a29c4bf       kube-apiserver-addons-174699
	
	
	==> containerd <==
	-- Journal begins at Tue 2024-02-13 21:57:00 UTC, ends at Tue 2024-02-13 22:00:27 UTC. --
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.535632202Z" level=info msg="stop pulling image gcr.io/google-samples/hello-app:1.0: active requests=0, bytes read=12772065"
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.538155436Z" level=info msg="ImageCreate event name:\"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.541492923Z" level=info msg="ImageUpdate event name:\"gcr.io/google-samples/hello-app:1.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.544525078Z" level=info msg="ImageCreate event name:\"gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.545311031Z" level=info msg="Pulled image \"gcr.io/google-samples/hello-app:1.0\" with image id \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\", repo tag \"gcr.io/google-samples/hello-app:1.0\", repo digest \"gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7\", size \"13745365\" in 2.00569335s"
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.545370362Z" level=info msg="PullImage \"gcr.io/google-samples/hello-app:1.0\" returns image reference \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\""
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.553187309Z" level=info msg="CreateContainer within sandbox \"bf026898a106d8f4e75b1cedc822052c6d6eec79320f585351762759082e7910\" for container &ContainerMetadata{Name:hello-world-app,Attempt:0,}"
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.571742726Z" level=info msg="CreateContainer within sandbox \"bf026898a106d8f4e75b1cedc822052c6d6eec79320f585351762759082e7910\" for &ContainerMetadata{Name:hello-world-app,Attempt:0,} returns container id \"287d21a339e51e375c2c64fa74da35c7d1adead549273bb8e468312ea59041dc\""
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.575810773Z" level=info msg="StartContainer for \"287d21a339e51e375c2c64fa74da35c7d1adead549273bb8e468312ea59041dc\""
	Feb 13 22:00:20 addons-174699 containerd[691]: time="2024-02-13T22:00:20.666446836Z" level=info msg="StartContainer for \"287d21a339e51e375c2c64fa74da35c7d1adead549273bb8e468312ea59041dc\" returns successfully"
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.133223202Z" level=info msg="Kill container \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\""
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.235801428Z" level=info msg="shim disconnected" id=bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1 namespace=k8s.io
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.235873354Z" level=warning msg="cleaning up after shim disconnected" id=bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1 namespace=k8s.io
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.235887143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.263837908Z" level=info msg="StopContainer for \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\" returns successfully"
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.264912394Z" level=info msg="StopPodSandbox for \"d505c95498e8798b3169c3d7146b9cb39a110e28e1cb1d36b2677a10b00eac85\""
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.265111484Z" level=info msg="Container to stop \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.311646211Z" level=info msg="shim disconnected" id=d505c95498e8798b3169c3d7146b9cb39a110e28e1cb1d36b2677a10b00eac85 namespace=k8s.io
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.311749433Z" level=warning msg="cleaning up after shim disconnected" id=d505c95498e8798b3169c3d7146b9cb39a110e28e1cb1d36b2677a10b00eac85 namespace=k8s.io
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.311762675Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.411030839Z" level=info msg="TearDown network for sandbox \"d505c95498e8798b3169c3d7146b9cb39a110e28e1cb1d36b2677a10b00eac85\" successfully"
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.411170009Z" level=info msg="StopPodSandbox for \"d505c95498e8798b3169c3d7146b9cb39a110e28e1cb1d36b2677a10b00eac85\" returns successfully"
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.532576759Z" level=info msg="RemoveContainer for \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\""
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.540375796Z" level=info msg="RemoveContainer for \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\" returns successfully"
	Feb 13 22:00:22 addons-174699 containerd[691]: time="2024-02-13T22:00:22.541463485Z" level=error msg="ContainerStatus for \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\": not found"
	
	
	==> coredns [9a7880a52abea1dda3f10ac57e5b2d144e6f9109f12c744ccef943ef0285967a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51089 - 55852 "HINFO IN 2973444069724565028.5019191103910997005. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034303307s
	[INFO] 10.244.0.22:39089 - 60845 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000257486s
	[INFO] 10.244.0.22:46465 - 25861 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000090398s
	[INFO] 10.244.0.22:54703 - 49139 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071804s
	[INFO] 10.244.0.22:40175 - 12193 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00075314s
	[INFO] 10.244.0.26:40378 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000594956s
	[INFO] 10.244.0.26:52317 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125337s
	[INFO] 10.244.0.21:35129 - 42116 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000268806s
	[INFO] 10.244.0.21:35129 - 48888 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000181976s
	[INFO] 10.244.0.21:35129 - 18807 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000127029s
	[INFO] 10.244.0.21:35129 - 3445 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113659s
	[INFO] 10.244.0.21:35129 - 8169 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000116295s
	[INFO] 10.244.0.21:35129 - 58303 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000110495s
	[INFO] 10.244.0.21:35129 - 29417 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109491s
	[INFO] 10.244.0.21:33444 - 58680 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000150281s
	[INFO] 10.244.0.21:33444 - 17282 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000190104s
	[INFO] 10.244.0.21:33444 - 3481 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000172352s
	[INFO] 10.244.0.21:33444 - 27982 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101309s
	[INFO] 10.244.0.21:33444 - 9782 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000160498s
	[INFO] 10.244.0.21:33444 - 38421 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000289809s
	[INFO] 10.244.0.21:33444 - 4666 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000121013s
	
	
	==> coredns [fd9c936bef0e279fd4553e4f4dca4ac10a3a0eb6f4df26fa737c64c00d46c67d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42781 - 51189 "HINFO IN 8897152079905503904.508110326658467403. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.050844007s
	[INFO] 10.244.0.22:37196 - 3579 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317047s
	[INFO] 10.244.0.22:58379 - 22286 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007931s
	[INFO] 10.244.0.22:36117 - 1321 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000261513s
	[INFO] 10.244.0.22:51724 - 51495 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000581004s
	[INFO] 10.244.0.21:57845 - 31883 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00034578s
	[INFO] 10.244.0.21:57845 - 57806 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000198478s
	[INFO] 10.244.0.21:57845 - 12952 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079999s
	[INFO] 10.244.0.21:57845 - 16307 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000109517s
	[INFO] 10.244.0.21:57845 - 1636 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00013981s
	[INFO] 10.244.0.21:57845 - 65306 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078271s
	[INFO] 10.244.0.21:57845 - 10820 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077058s
	[INFO] 10.244.0.21:46126 - 19562 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000234995s
	[INFO] 10.244.0.21:46126 - 16885 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000110179s
	[INFO] 10.244.0.21:46126 - 42046 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100264s
	[INFO] 10.244.0.21:46126 - 41488 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000197634s
	[INFO] 10.244.0.21:46126 - 49946 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092152s
	[INFO] 10.244.0.21:46126 - 41278 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073477s
	[INFO] 10.244.0.21:46126 - 21362 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000146105s
	
	
	==> describe nodes <==
	Name:               addons-174699
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-174699
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=addons-174699
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T21_57_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-174699
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 21:57:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-174699
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:00:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:00:09 +0000   Tue, 13 Feb 2024 21:57:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:00:09 +0000   Tue, 13 Feb 2024 21:57:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:00:09 +0000   Tue, 13 Feb 2024 21:57:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:00:09 +0000   Tue, 13 Feb 2024 21:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    addons-174699
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a7e0daf9e04734ad4c0067dd2ac868
	  System UUID:                16a7e0da-f9e0-4734-ad4c-0067dd2ac868
	  Boot ID:                    8fe5a866-d90a-4267-ab62-0c370ad38873
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-4ptmv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  gcp-auth                    gcp-auth-d4c87556c-4zpgd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  headlamp                    headlamp-7ddfbb94ff-rkfg4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 coredns-5dd5756b68-45snv                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m38s
	  kube-system                 coredns-5dd5756b68-nrfrv                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m38s
	  kube-system                 etcd-addons-174699                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m51s
	  kube-system                 kube-apiserver-addons-174699             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-controller-manager-addons-174699    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-nrz4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-scheduler-addons-174699             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-6mlsl           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             368Mi (9%)  596Mi (15%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m35s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x8 over 3m)  kubelet          Node addons-174699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x8 over 3m)  kubelet          Node addons-174699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x7 over 3m)  kubelet          Node addons-174699 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m52s            kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m51s            kubelet          Node addons-174699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s            kubelet          Node addons-174699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s            kubelet          Node addons-174699 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s            kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m51s            kubelet          Node addons-174699 status is now: NodeReady
	  Normal  RegisteredNode           2m38s            node-controller  Node addons-174699 event: Registered Node addons-174699 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.141715] systemd-fstab-generator[558]: Ignoring "noauto" for root device
	[  +0.105109] systemd-fstab-generator[569]: Ignoring "noauto" for root device
	[  +0.152935] systemd-fstab-generator[582]: Ignoring "noauto" for root device
	[  +0.118776] systemd-fstab-generator[593]: Ignoring "noauto" for root device
	[  +0.225442] systemd-fstab-generator[621]: Ignoring "noauto" for root device
	[  +6.790394] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +4.503980] systemd-fstab-generator[845]: Ignoring "noauto" for root device
	[  +8.765825] systemd-fstab-generator[1205]: Ignoring "noauto" for root device
	[ +20.056013] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 21:58] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.185307] kauditd_printk_skb: 35 callbacks suppressed
	[ +15.377967] kauditd_printk_skb: 4 callbacks suppressed
	[ +26.311941] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.216791] kauditd_printk_skb: 28 callbacks suppressed
	[Feb13 21:59] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.634240] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.949023] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.261439] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.010439] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.283271] kauditd_printk_skb: 24 callbacks suppressed
	[Feb13 22:00] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.243028] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.061560] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [04c79b3421a5b9536a0b2b9a5050d45d778df8e684d9013d6f7f835db299d984] <==
	{"level":"warn","ts":"2024-02-13T21:58:45.963826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.944851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-02-13T21:58:45.965608Z","caller":"traceutil/trace.go:171","msg":"trace[1587642045] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1021; }","duration":"268.56484ms","start":"2024-02-13T21:58:45.696865Z","end":"2024-02-13T21:58:45.96543Z","steps":["trace[1587642045] 'range keys from in-memory index tree'  (duration: 266.80361ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:58:45.963201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.986097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T21:58:45.966323Z","caller":"traceutil/trace.go:171","msg":"trace[625313603] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:0; response_revision:1021; }","duration":"187.112157ms","start":"2024-02-13T21:58:45.779203Z","end":"2024-02-13T21:58:45.966316Z","steps":["trace[625313603] 'range keys from in-memory index tree'  (duration: 183.925679ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:12.740821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.82732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T21:59:12.740952Z","caller":"traceutil/trace.go:171","msg":"trace[448231323] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1179; }","duration":"380.165625ms","start":"2024-02-13T21:59:12.360768Z","end":"2024-02-13T21:59:12.740934Z","steps":["trace[448231323] 'range keys from in-memory index tree'  (duration: 374.669278ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:12.740989Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:12.360701Z","time spent":"380.276807ms","remote":"127.0.0.1:48704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":28,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	{"level":"warn","ts":"2024-02-13T21:59:12.74133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.705299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-02-13T21:59:12.741405Z","caller":"traceutil/trace.go:171","msg":"trace[1414333910] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1179; }","duration":"363.906409ms","start":"2024-02-13T21:59:12.377489Z","end":"2024-02-13T21:59:12.741395Z","steps":["trace[1414333910] 'range keys from in-memory index tree'  (duration: 363.61292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:12.741428Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:12.377476Z","time spent":"363.944905ms","remote":"127.0.0.1:48704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-02-13T21:59:12.741612Z","caller":"traceutil/trace.go:171","msg":"trace[1193350241] linearizableReadLoop","detail":"{readStateIndex:1214; appliedIndex:1213; }","duration":"179.774834ms","start":"2024-02-13T21:59:12.561829Z","end":"2024-02-13T21:59:12.741604Z","steps":["trace[1193350241] 'read index received'  (duration: 178.664639ms)","trace[1193350241] 'applied index is now lower than readState.Index'  (duration: 1.109689ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T21:59:12.741749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.946794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-02-13T21:59:12.741782Z","caller":"traceutil/trace.go:171","msg":"trace[249894899] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1180; }","duration":"179.980749ms","start":"2024-02-13T21:59:12.561793Z","end":"2024-02-13T21:59:12.741774Z","steps":["trace[249894899] 'agreement among raft nodes before linearized reading'  (duration: 179.873971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:12.74179Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.796526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-02-13T21:59:12.741808Z","caller":"traceutil/trace.go:171","msg":"trace[1521666206] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1180; }","duration":"152.83558ms","start":"2024-02-13T21:59:12.588967Z","end":"2024-02-13T21:59:12.741803Z","steps":["trace[1521666206] 'agreement among raft nodes before linearized reading'  (duration: 152.732714ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:12.741874Z","caller":"traceutil/trace.go:171","msg":"trace[1665599835] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"254.342222ms","start":"2024-02-13T21:59:12.487521Z","end":"2024-02-13T21:59:12.741863Z","steps":["trace[1665599835] 'process raft request'  (duration: 253.008012ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:35.955767Z","caller":"traceutil/trace.go:171","msg":"trace[1761005306] linearizableReadLoop","detail":"{readStateIndex:1468; appliedIndex:1467; }","duration":"386.982803ms","start":"2024-02-13T21:59:35.568768Z","end":"2024-02-13T21:59:35.955751Z","steps":["trace[1761005306] 'read index received'  (duration: 386.605285ms)","trace[1761005306] 'applied index is now lower than readState.Index'  (duration: 376.522µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-13T21:59:35.958253Z","caller":"traceutil/trace.go:171","msg":"trace[1461161210] transaction","detail":"{read_only:false; response_revision:1422; number_of_response:1; }","duration":"402.505217ms","start":"2024-02-13T21:59:35.555528Z","end":"2024-02-13T21:59:35.958033Z","steps":["trace[1461161210] 'process raft request'  (duration: 399.856177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:35.958679Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:35.555511Z","time spent":"402.843101ms","remote":"127.0.0.1:48658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":698,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/task-pv-pod.17b38b0de862bf29\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/task-pv-pod.17b38b0de862bf29\" value_size:627 lease:1298881280097650055 >> failure:<>"}
	{"level":"warn","ts":"2024-02-13T21:59:35.958891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.133398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3752"}
	{"level":"warn","ts":"2024-02-13T21:59:35.95931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.555389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2965"}
	{"level":"info","ts":"2024-02-13T21:59:35.959672Z","caller":"traceutil/trace.go:171","msg":"trace[570894784] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1422; }","duration":"331.920764ms","start":"2024-02-13T21:59:35.627742Z","end":"2024-02-13T21:59:35.959663Z","steps":["trace[570894784] 'agreement among raft nodes before linearized reading'  (duration: 331.526607ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:35.959791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:35.627729Z","time spent":"332.04863ms","remote":"127.0.0.1:48684","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":2988,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-02-13T21:59:35.959016Z","caller":"traceutil/trace.go:171","msg":"trace[1022102460] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1422; }","duration":"390.258734ms","start":"2024-02-13T21:59:35.568744Z","end":"2024-02-13T21:59:35.959003Z","steps":["trace[1022102460] 'agreement among raft nodes before linearized reading'  (duration: 389.430203ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:35.960225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:35.568731Z","time spent":"391.45635ms","remote":"127.0.0.1:48684","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3775,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	
	
	==> gcp-auth [fcabe18a4f8fbbba29d8b83620f4a6b3271fc02c45645d921ef30f3906b2384a] <==
	2024/02/13 21:59:14 GCP Auth Webhook started!
	2024/02/13 21:59:15 Ready to marshal response ...
	2024/02/13 21:59:15 Ready to write response ...
	2024/02/13 21:59:15 Ready to marshal response ...
	2024/02/13 21:59:15 Ready to write response ...
	2024/02/13 21:59:26 Ready to marshal response ...
	2024/02/13 21:59:26 Ready to write response ...
	2024/02/13 21:59:26 Ready to marshal response ...
	2024/02/13 21:59:26 Ready to write response ...
	2024/02/13 21:59:29 Ready to marshal response ...
	2024/02/13 21:59:29 Ready to write response ...
	2024/02/13 21:59:29 Ready to marshal response ...
	2024/02/13 21:59:29 Ready to write response ...
	2024/02/13 21:59:29 Ready to marshal response ...
	2024/02/13 21:59:29 Ready to write response ...
	2024/02/13 21:59:34 Ready to marshal response ...
	2024/02/13 21:59:34 Ready to write response ...
	2024/02/13 21:59:44 Ready to marshal response ...
	2024/02/13 21:59:44 Ready to write response ...
	2024/02/13 21:59:58 Ready to marshal response ...
	2024/02/13 21:59:58 Ready to write response ...
	2024/02/13 22:00:00 Ready to marshal response ...
	2024/02/13 22:00:00 Ready to write response ...
	2024/02/13 22:00:17 Ready to marshal response ...
	2024/02/13 22:00:17 Ready to write response ...
	
	
	==> kernel <==
	 22:00:27 up 3 min,  0 users,  load average: 1.99, 1.62, 0.69
	Linux addons-174699 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8b946fd95b7dc49fe98ce01d03f0dc6ad61dfda2e2f5eb405aad902b25075c62] <==
	I0213 21:59:55.737954       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0213 21:59:56.757187       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0213 21:59:58.057434       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.71:8443->10.244.0.29:54472: read: connection reset by peer
	I0213 22:00:00.474322       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0213 22:00:00.681197       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.54.191"}
	I0213 22:00:17.279171       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.219.83"}
	I0213 22:00:17.444862       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.444900       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.468788       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.468826       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.492798       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.492830       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.532312       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.532345       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.559027       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.567804       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.597795       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.597824       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.607636       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.607666       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:00:17.629922       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:00:17.630006       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0213 22:00:18.493635       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0213 22:00:18.598138       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0213 22:00:18.671398       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [bea24f5d7b9e0815b905b46cfd2cab971175bc5fe9e410ef3ec69c3e43133291] <==
	I0213 22:00:19.078914       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0213 22:00:19.088808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.116µs"
	I0213 22:00:19.093531       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0213 22:00:19.243458       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0213 22:00:19.243554       1 shared_informer.go:318] Caches are synced for resource quota
	I0213 22:00:19.590442       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0213 22:00:19.590477       1 shared_informer.go:318] Caches are synced for garbage collector
	W0213 22:00:19.985274       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:19.985357       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:00:20.008798       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:20.008825       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:00:20.034430       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:20.034458       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 22:00:21.547915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.108642ms"
	I0213 22:00:21.548151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="104.618µs"
	W0213 22:00:21.699617       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:21.699770       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:00:21.790328       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:21.790433       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:00:22.862280       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:22.862334       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:00:26.825916       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:26.825979       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:00:27.326354       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:00:27.326515       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [d374e9bc77c767f4b1ff4764b403be1c0915344d215d2c083dd3b01cf158cc62] <==
	I0213 21:57:52.314852       1 server_others.go:69] "Using iptables proxy"
	I0213 21:57:52.341337       1 node.go:141] Successfully retrieved node IP: 192.168.39.71
	I0213 21:57:52.439108       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 21:57:52.439152       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 21:57:52.455186       1 server_others.go:152] "Using iptables Proxier"
	I0213 21:57:52.455313       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 21:57:52.455533       1 server.go:846] "Version info" version="v1.28.4"
	I0213 21:57:52.455568       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 21:57:52.456818       1 config.go:188] "Starting service config controller"
	I0213 21:57:52.456832       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 21:57:52.456849       1 config.go:97] "Starting endpoint slice config controller"
	I0213 21:57:52.456852       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 21:57:52.457294       1 config.go:315] "Starting node config controller"
	I0213 21:57:52.457301       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 21:57:52.557953       1 shared_informer.go:318] Caches are synced for node config
	I0213 21:57:52.558004       1 shared_informer.go:318] Caches are synced for service config
	I0213 21:57:52.558142       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54988d297bda6eaa6921ce686ad484664c0fc991dbae4bf848e35456d1e7c107] <==
	E0213 21:57:32.826902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:32.827001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 21:57:33.638245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 21:57:33.638298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 21:57:33.685167       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 21:57:33.685217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 21:57:33.789828       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 21:57:33.789945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 21:57:33.803822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 21:57:33.803871       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 21:57:33.937622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:33.937686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:33.939602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 21:57:33.939654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 21:57:33.963956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:33.964018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:33.963968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 21:57:33.964091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 21:57:34.012827       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 21:57:34.013018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 21:57:34.048212       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 21:57:34.048325       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 21:57:34.071887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 21:57:34.071951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0213 21:57:37.005363       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 21:57:00 UTC, ends at Tue 2024-02-13 22:00:27 UTC. --
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.492792    1212 scope.go:117] "RemoveContainer" containerID="90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366"
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.518175    1212 scope.go:117] "RemoveContainer" containerID="90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366"
	Feb 13 22:00:18 addons-174699 kubelet[1212]: E0213 22:00:18.519034    1212 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366\": not found" containerID="90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366"
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.519121    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366"} err="failed to get container status \"90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366\": rpc error: code = NotFound desc = an error occurred when try to find container \"90df8b471379290aecf7d2b73fbe436fef64a134eff693865969e94a1c2af366\": not found"
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.560780    1212 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjqqt\" (UniqueName: \"kubernetes.io/projected/5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d-kube-api-access-zjqqt\") pod \"5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d\" (UID: \"5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d\") "
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.567867    1212 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d-kube-api-access-zjqqt" (OuterVolumeSpecName: "kube-api-access-zjqqt") pod "5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d" (UID: "5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d"). InnerVolumeSpecName "kube-api-access-zjqqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.661747    1212 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9885n\" (UniqueName: \"kubernetes.io/projected/795b7e91-5569-43d9-a190-834862200040-kube-api-access-9885n\") pod \"795b7e91-5569-43d9-a190-834862200040\" (UID: \"795b7e91-5569-43d9-a190-834862200040\") "
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.661838    1212 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zjqqt\" (UniqueName: \"kubernetes.io/projected/5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d-kube-api-access-zjqqt\") on node \"addons-174699\" DevicePath \"\""
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.664340    1212 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795b7e91-5569-43d9-a190-834862200040-kube-api-access-9885n" (OuterVolumeSpecName: "kube-api-access-9885n") pod "795b7e91-5569-43d9-a190-834862200040" (UID: "795b7e91-5569-43d9-a190-834862200040"). InnerVolumeSpecName "kube-api-access-9885n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:00:18 addons-174699 kubelet[1212]: I0213 22:00:18.763214    1212 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9885n\" (UniqueName: \"kubernetes.io/projected/795b7e91-5569-43d9-a190-834862200040-kube-api-access-9885n\") on node \"addons-174699\" DevicePath \"\""
	Feb 13 22:00:20 addons-174699 kubelet[1212]: I0213 22:00:20.042302    1212 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5a6d03ac-bcad-4738-a850-878c113ca9f0" path="/var/lib/kubelet/pods/5a6d03ac-bcad-4738-a850-878c113ca9f0/volumes"
	Feb 13 22:00:20 addons-174699 kubelet[1212]: I0213 22:00:20.045265    1212 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d" path="/var/lib/kubelet/pods/5ebf9edc-9ad6-430e-b7f1-ebf63ce3540d/volumes"
	Feb 13 22:00:20 addons-174699 kubelet[1212]: I0213 22:00:20.048892    1212 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="795b7e91-5569-43d9-a190-834862200040" path="/var/lib/kubelet/pods/795b7e91-5569-43d9-a190-834862200040/volumes"
	Feb 13 22:00:20 addons-174699 kubelet[1212]: I0213 22:00:20.050357    1212 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a9f68bb5-a1de-46fb-95bc-f5c05c3a6d53" path="/var/lib/kubelet/pods/a9f68bb5-a1de-46fb-95bc-f5c05c3a6d53/volumes"
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.490990    1212 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62c88419-e095-4301-98b1-f60a3a049fd8-webhook-cert\") pod \"62c88419-e095-4301-98b1-f60a3a049fd8\" (UID: \"62c88419-e095-4301-98b1-f60a3a049fd8\") "
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.491566    1212 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rftn\" (UniqueName: \"kubernetes.io/projected/62c88419-e095-4301-98b1-f60a3a049fd8-kube-api-access-5rftn\") pod \"62c88419-e095-4301-98b1-f60a3a049fd8\" (UID: \"62c88419-e095-4301-98b1-f60a3a049fd8\") "
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.495773    1212 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c88419-e095-4301-98b1-f60a3a049fd8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "62c88419-e095-4301-98b1-f60a3a049fd8" (UID: "62c88419-e095-4301-98b1-f60a3a049fd8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.498130    1212 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c88419-e095-4301-98b1-f60a3a049fd8-kube-api-access-5rftn" (OuterVolumeSpecName: "kube-api-access-5rftn") pod "62c88419-e095-4301-98b1-f60a3a049fd8" (UID: "62c88419-e095-4301-98b1-f60a3a049fd8"). InnerVolumeSpecName "kube-api-access-5rftn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.529323    1212 scope.go:117] "RemoveContainer" containerID="bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1"
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.541120    1212 scope.go:117] "RemoveContainer" containerID="bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1"
	Feb 13 22:00:22 addons-174699 kubelet[1212]: E0213 22:00:22.541744    1212 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\": not found" containerID="bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1"
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.541807    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1"} err="failed to get container status \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb452b7dd108a0c41a8d7d2d965fe1ec6cc9f69c74bfcec96284431e90672de1\": not found"
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.591938    1212 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5rftn\" (UniqueName: \"kubernetes.io/projected/62c88419-e095-4301-98b1-f60a3a049fd8-kube-api-access-5rftn\") on node \"addons-174699\" DevicePath \"\""
	Feb 13 22:00:22 addons-174699 kubelet[1212]: I0213 22:00:22.592001    1212 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62c88419-e095-4301-98b1-f60a3a049fd8-webhook-cert\") on node \"addons-174699\" DevicePath \"\""
	Feb 13 22:00:24 addons-174699 kubelet[1212]: I0213 22:00:24.035666    1212 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="62c88419-e095-4301-98b1-f60a3a049fd8" path="/var/lib/kubelet/pods/62c88419-e095-4301-98b1-f60a3a049fd8/volumes"
	
	
	==> storage-provisioner [457bcaa2ccc58ebce1609b6e12ee72ab343c99b4a419b7012ef4c560006a2ed7] <==
	I0213 21:57:56.779449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 21:57:56.813384       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 21:57:56.813509       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 21:57:56.838694       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 21:57:56.853695       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ff3ae67-469e-46a1-b72c-83a05cd7794d", APIVersion:"v1", ResourceVersion:"531", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-174699_f7663601-3d2e-4bc5-8a15-938fc410ebc2 became leader
	I0213 21:57:56.853761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-174699_f7663601-3d2e-4bc5-8a15-938fc410ebc2!
	I0213 21:57:56.954274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-174699_f7663601-3d2e-4bc5-8a15-938fc410ebc2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-174699 -n addons-174699
helpers_test.go:261: (dbg) Run:  kubectl --context addons-174699 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (28.25s)


Test pass (278/318)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.73
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 5.7
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 12.54
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 95.77
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 148.1
38 TestAddons/parallel/Registry 16.99
40 TestAddons/parallel/InspektorGadget 11.98
41 TestAddons/parallel/MetricsServer 6.15
42 TestAddons/parallel/HelmTiller 21.72
44 TestAddons/parallel/CSI 62.64
45 TestAddons/parallel/Headlamp 15.94
46 TestAddons/parallel/CloudSpanner 6.36
47 TestAddons/parallel/LocalPath 54.96
48 TestAddons/parallel/NvidiaDevicePlugin 5.61
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 92.54
54 TestCertOptions 97.27
55 TestCertExpiration 321.53
57 TestForceSystemdFlag 128.3
58 TestForceSystemdEnv 50.27
60 TestKVMDriverInstallOrUpdate 3.39
64 TestErrorSpam/setup 48.82
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.52
68 TestErrorSpam/unpause 1.68
69 TestErrorSpam/stop 1.49
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 63.78
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.95
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.99
81 TestFunctional/serial/CacheCmd/cache/add_local 1.69
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 50.21
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.54
92 TestFunctional/serial/LogsFileCmd 1.5
93 TestFunctional/serial/InvalidService 3.58
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 25.99
97 TestFunctional/parallel/DryRun 0.32
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.98
103 TestFunctional/parallel/ServiceCmdConnect 12.49
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 45.79
107 TestFunctional/parallel/SSHCmd 0.45
108 TestFunctional/parallel/CpCmd 1.43
109 TestFunctional/parallel/MySQL 26.91
110 TestFunctional/parallel/FileSync 0.25
111 TestFunctional/parallel/CertSync 1.41
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
119 TestFunctional/parallel/License 0.24
129 TestFunctional/parallel/ServiceCmd/DeployApp 13.18
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
131 TestFunctional/parallel/ProfileCmd/profile_list 0.26
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
133 TestFunctional/parallel/MountCmd/any-port 9.45
134 TestFunctional/parallel/MountCmd/specific-port 1.7
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
136 TestFunctional/parallel/ServiceCmd/List 0.36
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
139 TestFunctional/parallel/ServiceCmd/Format 0.36
140 TestFunctional/parallel/ServiceCmd/URL 0.33
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
144 TestFunctional/parallel/Version/short 0.06
145 TestFunctional/parallel/Version/components 0.51
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.15
151 TestFunctional/parallel/ImageCommands/Setup 0.9
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.4
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.07
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.44
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.31
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.83
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.84
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 106.27
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.48
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.73
172 TestJSONOutput/start/Command 68.31
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.65
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.64
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 98.94
204 TestMountStart/serial/StartWithMountFirst 29.03
205 TestMountStart/serial/VerifyMountFirst 0.39
206 TestMountStart/serial/StartWithMountSecond 28.98
207 TestMountStart/serial/VerifyMountSecond 0.41
208 TestMountStart/serial/DeleteFirst 0.66
209 TestMountStart/serial/VerifyMountPostDelete 0.42
210 TestMountStart/serial/Stop 1.5
211 TestMountStart/serial/RestartStopped 23.58
212 TestMountStart/serial/VerifyMountPostStop 0.41
215 TestMultiNode/serial/FreshStart2Nodes 110.88
216 TestMultiNode/serial/DeployApp2Nodes 4.31
217 TestMultiNode/serial/PingHostFrom2Pods 0.89
218 TestMultiNode/serial/AddNode 43.68
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.21
221 TestMultiNode/serial/CopyFile 7.6
222 TestMultiNode/serial/StopNode 2.26
223 TestMultiNode/serial/StartAfterStop 27.69
224 TestMultiNode/serial/RestartKeepsNodes 312.39
225 TestMultiNode/serial/DeleteNode 1.79
226 TestMultiNode/serial/StopMultiNode 183.66
227 TestMultiNode/serial/RestartMultiNode 88.95
228 TestMultiNode/serial/ValidateNameConflict 47.76
233 TestPreload 238.42
235 TestScheduledStopUnix 119.52
239 TestRunningBinaryUpgrade 160.06
241 TestKubernetesUpgrade 177.82
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
248 TestNoKubernetes/serial/StartWithK8s 107.29
253 TestNetworkPlugins/group/false 3.32
257 TestNoKubernetes/serial/StartWithStopK8s 82.17
258 TestNoKubernetes/serial/Start 31.96
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
260 TestNoKubernetes/serial/ProfileList 1.35
261 TestNoKubernetes/serial/Stop 1.34
262 TestNoKubernetes/serial/StartNoArgs 41.1
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
264 TestStoppedBinaryUpgrade/Setup 0.85
265 TestStoppedBinaryUpgrade/Upgrade 149.15
274 TestPause/serial/Start 114.35
275 TestNetworkPlugins/group/auto/Start 103.28
276 TestNetworkPlugins/group/kindnet/Start 95.75
277 TestPause/serial/SecondStartNoReconfiguration 11.13
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
279 TestNetworkPlugins/group/calico/Start 96.81
280 TestPause/serial/Pause 1.09
281 TestPause/serial/VerifyStatus 0.35
282 TestPause/serial/Unpause 0.86
283 TestPause/serial/PauseAgain 0.91
284 TestPause/serial/DeletePaused 1.32
285 TestPause/serial/VerifyDeletedResources 0.54
286 TestNetworkPlugins/group/custom-flannel/Start 97.02
287 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
288 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
289 TestNetworkPlugins/group/kindnet/NetCatPod 12.24
290 TestNetworkPlugins/group/auto/KubeletFlags 0.24
291 TestNetworkPlugins/group/auto/NetCatPod 12.29
292 TestNetworkPlugins/group/kindnet/DNS 0.24
293 TestNetworkPlugins/group/kindnet/Localhost 0.2
294 TestNetworkPlugins/group/kindnet/HairPin 0.18
295 TestNetworkPlugins/group/auto/DNS 0.29
296 TestNetworkPlugins/group/auto/Localhost 0.22
297 TestNetworkPlugins/group/auto/HairPin 0.17
298 TestNetworkPlugins/group/enable-default-cni/Start 71.63
299 TestNetworkPlugins/group/flannel/Start 121.81
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.22
302 TestNetworkPlugins/group/calico/NetCatPod 9.27
303 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
304 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
305 TestNetworkPlugins/group/calico/DNS 0.18
306 TestNetworkPlugins/group/calico/Localhost 0.16
307 TestNetworkPlugins/group/calico/HairPin 0.19
308 TestNetworkPlugins/group/custom-flannel/DNS 0.25
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
311 TestNetworkPlugins/group/bridge/Start 88.49
313 TestStartStop/group/old-k8s-version/serial/FirstStart 180.2
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
320 TestStartStop/group/no-preload/serial/FirstStart 103.86
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
323 TestNetworkPlugins/group/flannel/NetCatPod 10.51
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
325 TestNetworkPlugins/group/bridge/NetCatPod 10.27
326 TestNetworkPlugins/group/flannel/DNS 0.18
327 TestNetworkPlugins/group/flannel/Localhost 0.15
328 TestNetworkPlugins/group/flannel/HairPin 0.14
329 TestNetworkPlugins/group/bridge/DNS 0.18
330 TestNetworkPlugins/group/bridge/Localhost 0.15
331 TestNetworkPlugins/group/bridge/HairPin 0.17
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 105.31
335 TestStartStop/group/newest-cni/serial/FirstStart 84.46
336 TestStartStop/group/no-preload/serial/DeployApp 7.32
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
338 TestStartStop/group/no-preload/serial/Stop 91.92
339 TestStartStop/group/old-k8s-version/serial/DeployApp 8.45
340 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.99
341 TestStartStop/group/old-k8s-version/serial/Stop 91.99
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
344 TestStartStop/group/newest-cni/serial/Stop 2.12
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
346 TestStartStop/group/newest-cni/serial/SecondStart 45.48
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.28
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
353 TestStartStop/group/newest-cni/serial/Pause 2.4
355 TestStartStop/group/embed-certs/serial/FirstStart 105.92
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/no-preload/serial/SecondStart 359.22
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
359 TestStartStop/group/old-k8s-version/serial/SecondStart 179.19
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.93
362 TestStartStop/group/embed-certs/serial/DeployApp 7.37
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
364 TestStartStop/group/embed-certs/serial/Stop 92.3
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
366 TestStartStop/group/embed-certs/serial/SecondStart 581.15
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
370 TestStartStop/group/old-k8s-version/serial/Pause 2.6
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
374 TestStartStop/group/no-preload/serial/Pause 2.73
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.57
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/embed-certs/serial/Pause 2.53
TestDownloadOnly/v1.16.0/json-events (13.73s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-531817 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-531817 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (13.728630798s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.73s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-531817
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-531817: exit status 85 (72.777921ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-531817 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |          |
	|         | -p download-only-531817        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:13
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:13.165601   16174 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:13.165880   16174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:13.165890   16174 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:13.165895   16174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:13.166104   16174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	W0213 21:56:13.166234   16174 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18171-8975/.minikube/config/config.json: open /home/jenkins/minikube-integration/18171-8975/.minikube/config/config.json: no such file or directory
	I0213 21:56:13.166819   16174 out.go:298] Setting JSON to true
	I0213 21:56:13.167746   16174 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2321,"bootTime":1707859053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:13.167811   16174 start.go:138] virtualization: kvm guest
	I0213 21:56:13.170262   16174 out.go:97] [download-only-531817] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:13.171787   16174 out.go:169] MINIKUBE_LOCATION=18171
	W0213 21:56:13.170386   16174 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 21:56:13.170474   16174 notify.go:220] Checking for updates...
	I0213 21:56:13.175115   16174 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:13.176607   16174 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 21:56:13.177959   16174 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 21:56:13.179370   16174 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 21:56:13.182424   16174 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 21:56:13.182715   16174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:13.674684   16174 out.go:97] Using the kvm2 driver based on user configuration
	I0213 21:56:13.674721   16174 start.go:298] selected driver: kvm2
	I0213 21:56:13.674727   16174 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:13.675048   16174 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:13.675163   16174 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:13.689918   16174 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:13.690012   16174 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:13.690504   16174 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0213 21:56:13.690669   16174 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 21:56:13.690726   16174 cni.go:84] Creating CNI manager for ""
	I0213 21:56:13.690739   16174 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0213 21:56:13.690750   16174 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:13.690756   16174 start_flags.go:321] config:
	{Name:download-only-531817 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-531817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:13.690954   16174 iso.go:125] acquiring lock: {Name:mke99a7249501a63f2cf8fb971ea34ada8b7e341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:13.693062   16174 out.go:97] Downloading VM boot image ...
	I0213 21:56:13.693098   16174 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18171-8975/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 21:56:16.199946   16174 out.go:97] Starting control plane node download-only-531817 in cluster download-only-531817
	I0213 21:56:16.199981   16174 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0213 21:56:16.247497   16174 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0213 21:56:16.247542   16174 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:16.247744   16174 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0213 21:56:16.249912   16174 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 21:56:16.249938   16174 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:16.340163   16174 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-531817"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-531817
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.28.4/json-events (5.7s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-860514 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-860514 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.701306572s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.70s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-860514
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-860514: exit status 85 (73.809336ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-531817 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-531817        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-531817        | download-only-531817 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only        | download-only-860514 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-860514        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:27.243831   16368 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:27.244083   16368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:27.244092   16368 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:27.244096   16368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:27.244307   16368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 21:56:27.244875   16368 out.go:298] Setting JSON to true
	I0213 21:56:27.245672   16368 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2335,"bootTime":1707859053,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:27.245729   16368 start.go:138] virtualization: kvm guest
	I0213 21:56:27.248237   16368 out.go:97] [download-only-860514] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:27.249909   16368 out.go:169] MINIKUBE_LOCATION=18171
	I0213 21:56:27.248386   16368 notify.go:220] Checking for updates...
	I0213 21:56:27.252765   16368 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:27.254260   16368 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 21:56:27.255921   16368 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 21:56:27.257499   16368 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 21:56:27.260147   16368 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 21:56:27.260450   16368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:27.293489   16368 out.go:97] Using the kvm2 driver based on user configuration
	I0213 21:56:27.293510   16368 start.go:298] selected driver: kvm2
	I0213 21:56:27.293515   16368 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:27.293804   16368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:27.293883   16368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:27.308445   16368 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:27.308514   16368 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:27.308958   16368 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0213 21:56:27.309085   16368 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 21:56:27.309131   16368 cni.go:84] Creating CNI manager for ""
	I0213 21:56:27.309147   16368 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0213 21:56:27.309156   16368 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:27.309165   16368 start_flags.go:321] config:
	{Name:download-only-860514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-860514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:27.309287   16368 iso.go:125] acquiring lock: {Name:mke99a7249501a63f2cf8fb971ea34ada8b7e341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:27.311240   16368 out.go:97] Starting control plane node download-only-860514 in cluster download-only-860514
	I0213 21:56:27.311251   16368 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0213 21:56:27.339908   16368 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0213 21:56:27.339934   16368 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:27.340081   16368 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0213 21:56:27.342169   16368 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0213 21:56:27.342185   16368 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:27.373210   16368 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0213 21:56:31.280309   16368 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:31.280429   16368 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:32.213907   16368 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0213 21:56:32.214253   16368 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/download-only-860514/config.json ...
	I0213 21:56:32.214282   16368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/download-only-860514/config.json: {Name:mk15300dff5fe429a5ddbeb83dd1ddfd39d73b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:56:32.214431   16368 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0213 21:56:32.214543   16368 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18171-8975/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-860514"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-860514
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.0-rc.2/json-events (12.54s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-579598 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-579598 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (12.536392332s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.54s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-579598
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-579598: exit status 85 (74.166233ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-531817 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-531817           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-531817           | download-only-531817 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only           | download-only-860514 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-860514           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-860514           | download-only-860514 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only           | download-only-579598 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-579598           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:33
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:33.296743   16523 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:33.296841   16523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:33.296847   16523 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:33.296852   16523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:33.297059   16523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 21:56:33.297659   16523 out.go:298] Setting JSON to true
	I0213 21:56:33.298473   16523 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2341,"bootTime":1707859053,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:33.298528   16523 start.go:138] virtualization: kvm guest
	I0213 21:56:33.301014   16523 out.go:97] [download-only-579598] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:33.302645   16523 out.go:169] MINIKUBE_LOCATION=18171
	I0213 21:56:33.301334   16523 notify.go:220] Checking for updates...
	I0213 21:56:33.306199   16523 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:33.307932   16523 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 21:56:33.309622   16523 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 21:56:33.311499   16523 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 21:56:33.314687   16523 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 21:56:33.315035   16523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:33.347335   16523 out.go:97] Using the kvm2 driver based on user configuration
	I0213 21:56:33.347357   16523 start.go:298] selected driver: kvm2
	I0213 21:56:33.347362   16523 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:33.347654   16523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:33.347727   16523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:33.361886   16523 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:33.361959   16523 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:33.362625   16523 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0213 21:56:33.362806   16523 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 21:56:33.362918   16523 cni.go:84] Creating CNI manager for ""
	I0213 21:56:33.362939   16523 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0213 21:56:33.362957   16523 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:33.362968   16523 start_flags.go:321] config:
	{Name:download-only-579598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-579598 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:33.363137   16523 iso.go:125] acquiring lock: {Name:mke99a7249501a63f2cf8fb971ea34ada8b7e341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:33.365134   16523 out.go:97] Starting control plane node download-only-579598 in cluster download-only-579598
	I0213 21:56:33.365153   16523 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0213 21:56:33.398697   16523 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0213 21:56:33.398720   16523 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:33.398873   16523 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0213 21:56:33.400825   16523 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0213 21:56:33.400851   16523 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:33.454402   16523 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0213 21:56:37.534583   16523 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:37.534681   16523 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18171-8975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0213 21:56:38.349168   16523 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0213 21:56:38.349495   16523 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/download-only-579598/config.json ...
	I0213 21:56:38.349522   16523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/download-only-579598/config.json: {Name:mk507dd87865af971f7874af8198f8f4740498eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:56:38.349685   16523 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0213 21:56:38.349815   16523 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18171-8975/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-579598"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-579598
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-813907 --alsologtostderr --binary-mirror http://127.0.0.1:36691 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-813907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-813907
--- PASS: TestBinaryMirror (0.57s)

TestOffline (95.77s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-654013 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-654013 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m34.763063632s)
helpers_test.go:175: Cleaning up "offline-containerd-654013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-654013
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-654013: (1.005431667s)
--- PASS: TestOffline (95.77s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-174699
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-174699: exit status 85 (72.773536ms)

-- stdout --
	* Profile "addons-174699" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-174699"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-174699
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-174699: exit status 85 (70.352465ms)

-- stdout --
	* Profile "addons-174699" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-174699"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (148.1s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-174699 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-174699 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.104609596s)
--- PASS: TestAddons/Setup (148.10s)

TestAddons/parallel/Registry (16.99s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 17.241124ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nq98k" [8572b586-44d6-44d9-8930-69d52e78c935] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007229881s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pgkpv" [51d7fe9d-91e2-49a6-adeb-4a9f32c4fdb1] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005718214s
addons_test.go:340: (dbg) Run:  kubectl --context addons-174699 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-174699 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-174699 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.872465522s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 ip
2024/02/13 21:59:31 [DEBUG] GET http://192.168.39.71:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.99s)

TestAddons/parallel/InspektorGadget (11.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-26m2z" [fb88e8a2-b814-4451-802c-d77ce4234459] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00425424s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-174699
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-174699: (5.976888725s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

TestAddons/parallel/MetricsServer (6.15s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.113164ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-vbmfc" [0b8dda7d-7308-43e2-aad8-dc0aa55d063e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007710174s
addons_test.go:415: (dbg) Run:  kubectl --context addons-174699 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-174699 addons disable metrics-server --alsologtostderr -v=1: (1.051798104s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)

TestAddons/parallel/HelmTiller (21.72s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.526428ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-xmbn9" [8b859860-6572-453a-91cc-40aa67cb3030] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.00521637s
addons_test.go:473: (dbg) Run:  kubectl --context addons-174699 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-174699 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (15.038439486s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (21.72s)

TestAddons/parallel/CSI (62.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 19.546248ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-174699 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-174699 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [503cd57b-fb83-4e65-8591-95feb2b9de5b] Pending
helpers_test.go:344: "task-pv-pod" [503cd57b-fb83-4e65-8591-95feb2b9de5b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [503cd57b-fb83-4e65-8591-95feb2b9de5b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005454917s
addons_test.go:584: (dbg) Run:  kubectl --context addons-174699 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-174699 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-174699 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-174699 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-174699 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-174699 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-174699 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1c8d7d87-a106-4a54-a693-bc3ee809b052] Pending
helpers_test.go:344: "task-pv-pod-restore" [1c8d7d87-a106-4a54-a693-bc3ee809b052] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1c8d7d87-a106-4a54-a693-bc3ee809b052] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004514376s
addons_test.go:626: (dbg) Run:  kubectl --context addons-174699 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-174699 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-174699 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-174699 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.065919344s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-amd64 -p addons-174699 addons disable volumesnapshots --alsologtostderr -v=1: (1.07368283s)
--- PASS: TestAddons/parallel/CSI (62.64s)

TestAddons/parallel/Headlamp (15.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-174699 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-174699 --alsologtostderr -v=1: (1.930930196s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-rkfg4" [8022662a-d94b-43a7-9be6-85f04481b008] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-rkfg4" [8022662a-d94b-43a7-9be6-85f04481b008] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-rkfg4" [8022662a-d94b-43a7-9be6-85f04481b008] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004767917s
--- PASS: TestAddons/parallel/Headlamp (15.94s)

TestAddons/parallel/CloudSpanner (6.36s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-t2w8d" [6b6ca8e5-25be-44e5-a5cd-755bacd2ae35] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004492205s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-174699
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-174699: (1.347810631s)
--- PASS: TestAddons/parallel/CloudSpanner (6.36s)

TestAddons/parallel/LocalPath (54.96s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-174699 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-174699 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [28c2df01-4a18-435e-af30-d0eab71773fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [28c2df01-4a18-435e-af30-d0eab71773fc] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [28c2df01-4a18-435e-af30-d0eab71773fc] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004386552s
addons_test.go:891: (dbg) Run:  kubectl --context addons-174699 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 ssh "cat /opt/local-path-provisioner/pvc-7889aafc-416e-46ca-a309-ac145c00bb50_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-174699 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-174699 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-174699 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-174699 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.07330882s)
--- PASS: TestAddons/parallel/LocalPath (54.96s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8dzqr" [792f8293-6c14-461d-8057-a6a3dd4a96a9] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005595054s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-174699
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6mlsl" [18dc18dd-6b24-4dee-8177-e92f28d5510b] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004166453s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-174699 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-174699 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-174699
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-174699: (1m32.229830206s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-174699
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-174699
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-174699
--- PASS: TestAddons/StoppedEnableDisable (92.54s)

TestCertOptions (97.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-521855 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-521855 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m35.720875134s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-521855 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-521855 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-521855 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-521855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-521855
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-521855: (1.050948824s)
--- PASS: TestCertOptions (97.27s)

TestCertExpiration (321.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-239005 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0213 22:34:15.275355   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-239005 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m52.811099257s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-239005 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-239005 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (27.362396112s)
helpers_test.go:175: Cleaning up "cert-expiration-239005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-239005
E0213 22:39:15.275171   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-239005: (1.353131074s)
--- PASS: TestCertExpiration (321.53s)

TestForceSystemdFlag (128.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-373552 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-373552 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m7.227486805s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-373552 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-373552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-373552
--- PASS: TestForceSystemdFlag (128.30s)

TestForceSystemdEnv (50.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-706846 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-706846 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (49.290645258s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-706846 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-706846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-706846
--- PASS: TestForceSystemdEnv (50.27s)

TestKVMDriverInstallOrUpdate (3.39s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.39s)

TestErrorSpam/setup (48.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-752464 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-752464 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-752464 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-752464 --driver=kvm2  --container-runtime=containerd: (48.820447274s)
--- PASS: TestErrorSpam/setup (48.82s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 stop: (1.323472466s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-752464 --log_dir /tmp/nospam-752464 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18171-8975/.minikube/files/etc/test/nested/copy/16162/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-056895 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0213 22:04:15.274410   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.280418   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.290696   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.311055   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.351428   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.431830   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.592278   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:15.912879   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-056895 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m3.780367416s)
--- PASS: TestFunctional/serial/StartWithProxy (63.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.95s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-056895 --alsologtostderr -v=8
E0213 22:04:16.553495   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:17.834390   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:20.395335   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-056895 --alsologtostderr -v=8: (5.950691689s)
functional_test.go:659: soft start took 5.951402537s for "functional-056895" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.95s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-056895 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 cache add registry.k8s.io/pause:3.1: (1.340216318s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 cache add registry.k8s.io/pause:3.3: (1.382597366s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cache add registry.k8s.io/pause:latest
E0213 22:04:25.516155   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 cache add registry.k8s.io/pause:latest: (1.267544663s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-056895 /tmp/TestFunctionalserialCacheCmdcacheadd_local3371991885/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cache add minikube-local-cache-test:functional-056895
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 cache add minikube-local-cache-test:functional-056895: (1.326693869s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cache delete minikube-local-cache-test:functional-056895
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-056895
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (238.039841ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 cache reload: (1.176495068s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
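The cache_reload run above removes an image from the node, confirms `crictl inspecti` now fails, then restores it with `minikube cache reload`. A minimal Python sketch of that delete/verify/restore pattern, using in-memory dicts as stand-ins for the node's image store and minikube's local cache (all names here are illustrative, not minikube's actual API):

```python
# Stand-in stores: minikube's on-disk cache and the node's image store.
cache = {"registry.k8s.io/pause:latest": b"<image-bytes>"}
node_images = dict(cache)  # node starts with the cached image loaded

def rmi(image):
    """Mimic `crictl rmi`: drop the image from the node."""
    node_images.pop(image, None)

def inspecti(image):
    """Mimic `crictl inspecti`: exit status 0 if present, 1 otherwise."""
    return 0 if image in node_images else 1

def cache_reload():
    """Mimic `minikube cache reload`: re-load every cached image onto the node."""
    node_images.update(cache)

img = "registry.k8s.io/pause:latest"
rmi(img)
assert inspecti(img) == 1  # image gone, like the FATA[0000] / exit status 1 above
cache_reload()
assert inspecti(img) == 0  # restored from the cache, so the final inspecti passes
```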
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 kubectl -- --context functional-056895 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-056895 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
TestFunctional/serial/ExtraConfig (50.21s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-056895 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0213 22:04:35.757138   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:04:56.237499   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-056895 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.213461405s)
functional_test.go:757: restart took 50.213567224s for "functional-056895" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (50.21s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-056895 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.54s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 logs: (1.539345682s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)
TestFunctional/serial/LogsFileCmd (1.5s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 logs --file /tmp/TestFunctionalserialLogsFileCmd3939199349/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 logs --file /tmp/TestFunctionalserialLogsFileCmd3939199349/001/logs.txt: (1.503698285s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)
TestFunctional/serial/InvalidService (3.58s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-056895 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-056895
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-056895: exit status 115 (294.058324ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.117:32723 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-056895 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.58s)
TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 config get cpus: exit status 14 (68.24098ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 config get cpus: exit status 14 (73.41298ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
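As the run shows, `config get` exits with status 14 when the key is unset and 0 once `config set` has stored it, while `unset` succeeds either way. A hedged Python sketch of those exit-code semantics over a plain dict (the value 14 mirrors the log; the function names are made up for illustration):

```python
MISSING_KEY_EXIT = 14  # exit status the log shows for `config get` on an unset key

config = {}

def config_set(key, value):
    """Mimic `minikube config set`: store the value, exit 0."""
    config[key] = value
    return 0

def config_unset(key):
    """Mimic `minikube config unset`: succeeds even if the key is absent."""
    config.pop(key, None)
    return 0

def config_get(key):
    """Mimic `minikube config get`: return (exit status, value)."""
    if key not in config:
        return MISSING_KEY_EXIT, None
    return 0, config[key]

# Same sequence the test drives: unset -> get (14) -> set -> get (0) -> unset -> get (14)
assert config_get("cpus") == (14, None)
config_set("cpus", "2")
assert config_get("cpus") == (0, "2")
config_unset("cpus")
assert config_get("cpus") == (14, None)
```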
TestFunctional/parallel/DashboardCmd (25.99s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-056895 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-056895 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23596: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.99s)
TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-056895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-056895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.096051ms)
-- stdout --
	* [functional-056895] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0213 22:05:41.967367   23195 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:05:41.967510   23195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:05:41.967519   23195 out.go:304] Setting ErrFile to fd 2...
	I0213 22:05:41.967523   23195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:05:41.967764   23195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 22:05:41.968432   23195 out.go:298] Setting JSON to false
	I0213 22:05:41.969440   23195 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2889,"bootTime":1707859053,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:05:41.969502   23195 start.go:138] virtualization: kvm guest
	I0213 22:05:41.971646   23195 out.go:177] * [functional-056895] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 22:05:41.973737   23195 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:05:41.973700   23195 notify.go:220] Checking for updates...
	I0213 22:05:41.977599   23195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:05:41.978994   23195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 22:05:41.980457   23195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 22:05:41.982149   23195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:05:41.983533   23195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:05:41.985617   23195 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:05:41.986204   23195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:05:41.986295   23195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:05:42.002836   23195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0213 22:05:42.003418   23195 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:05:42.004022   23195 main.go:141] libmachine: Using API Version  1
	I0213 22:05:42.004044   23195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:05:42.004511   23195 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:05:42.004697   23195 main.go:141] libmachine: (functional-056895) Calling .DriverName
	I0213 22:05:42.004950   23195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:05:42.005324   23195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:05:42.005364   23195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:05:42.022752   23195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
	I0213 22:05:42.023117   23195 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:05:42.023680   23195 main.go:141] libmachine: Using API Version  1
	I0213 22:05:42.023708   23195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:05:42.024083   23195 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:05:42.024272   23195 main.go:141] libmachine: (functional-056895) Calling .DriverName
	I0213 22:05:42.060399   23195 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 22:05:42.061983   23195 start.go:298] selected driver: kvm2
	I0213 22:05:42.061997   23195 start.go:902] validating driver "kvm2" against &{Name:functional-056895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-056895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:05:42.062073   23195 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:05:42.064365   23195 out.go:177] 
	W0213 22:05:42.065980   23195 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0213 22:05:42.067423   23195 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-056895 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.32s)
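The dry run exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the usable minimum of 1800MB quoted in the error. A small Python sketch of that validation step (the 1800 floor and the exit status 23 mirror the log; the parsing helper is an assumption, not minikube's code):

```python
import re

USABLE_MINIMUM_MB = 1800  # floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message
EXIT_INSUFFICIENT = 23    # exit status observed for the failed dry run above

def parse_mb(spec):
    """Parse a --memory value like '250MB' into an integer number of MB."""
    m = re.fullmatch(r"(\d+)\s*[Mm][Bb]?", spec.strip())
    if not m:
        raise ValueError("unrecognized memory spec: %r" % spec)
    return int(m.group(1))

def validate_memory(spec):
    """Return 0 if the request meets the minimum, else the failure exit code."""
    return 0 if parse_mb(spec) >= USABLE_MINIMUM_MB else EXIT_INSUFFICIENT

assert validate_memory("250MB") == 23  # matches the dry-run exit status
assert validate_memory("4000MB") == 0  # the profile's actual Memory:4000 passes
```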
TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-056895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-056895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (155.455214ms)
-- stdout --
	* [functional-056895] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0213 22:05:41.819104   23149 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:05:41.819236   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:05:41.819248   23149 out.go:304] Setting ErrFile to fd 2...
	I0213 22:05:41.819255   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:05:41.819516   23149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 22:05:41.820001   23149 out.go:298] Setting JSON to false
	I0213 22:05:41.821003   23149 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2889,"bootTime":1707859053,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:05:41.821061   23149 start.go:138] virtualization: kvm guest
	I0213 22:05:41.823599   23149 out.go:177] * [functional-056895] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0213 22:05:41.825662   23149 notify.go:220] Checking for updates...
	I0213 22:05:41.826979   23149 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:05:41.828329   23149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:05:41.829699   23149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 22:05:41.830985   23149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 22:05:41.832513   23149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:05:41.834688   23149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:05:41.836839   23149 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:05:41.837230   23149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:05:41.837277   23149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:05:41.852018   23149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0213 22:05:41.852383   23149 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:05:41.853197   23149 main.go:141] libmachine: Using API Version  1
	I0213 22:05:41.853220   23149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:05:41.853708   23149 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:05:41.853915   23149 main.go:141] libmachine: (functional-056895) Calling .DriverName
	I0213 22:05:41.854156   23149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:05:41.854895   23149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:05:41.854951   23149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:05:41.869808   23149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0213 22:05:41.870176   23149 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:05:41.870578   23149 main.go:141] libmachine: Using API Version  1
	I0213 22:05:41.870606   23149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:05:41.870924   23149 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:05:41.871066   23149 main.go:141] libmachine: (functional-056895) Calling .DriverName
	I0213 22:05:41.903659   23149 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0213 22:05:41.905600   23149 start.go:298] selected driver: kvm2
	I0213 22:05:41.905616   23149 start.go:902] validating driver "kvm2" against &{Name:functional-056895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-056895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:05:41.905742   23149 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:05:41.907827   23149 out.go:177] 
	W0213 22:05:41.909641   23149 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0213 22:05:41.911192   23149 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
TestFunctional/parallel/StatusCmd (0.98s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
TestFunctional/parallel/ServiceCmdConnect (12.49s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-056895 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-056895 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-mqk24" [c883cb3c-e759-46a9-b126-54e697ad4db2] Pending
helpers_test.go:344: "hello-node-connect-55497b8b78-mqk24" [c883cb3c-e759-46a9-b126-54e697ad4db2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-mqk24" [c883cb3c-e759-46a9-b126-54e697ad4db2] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004871218s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.117:30749
functional_test.go:1671: http://192.168.39.117:30749: success! body:

Hostname: hello-node-connect-55497b8b78-mqk24

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.117:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.117:30749
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.49s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (45.79s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [04a6e75b-067c-43e0-9fe8-5ef27a109e50] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004856932s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-056895 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-056895 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-056895 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-056895 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-056895 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ed6d072f-3673-4d96-979b-56207a73f69e] Pending
helpers_test.go:344: "sp-pod" [ed6d072f-3673-4d96-979b-56207a73f69e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ed6d072f-3673-4d96-979b-56207a73f69e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00706866s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-056895 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-056895 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-056895 delete -f testdata/storage-provisioner/pod.yaml: (1.333312366s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-056895 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2be4c30-ceca-481c-8032-61d99f8b6050] Pending
helpers_test.go:344: "sp-pod" [d2be4c30-ceca-481c-8032-61d99f8b6050] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2be4c30-ceca-481c-8032-61d99f8b6050] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.00453099s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-056895 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.79s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh -n functional-056895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cp functional-056895:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2326151924/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh -n functional-056895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh -n functional-056895 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)

TestFunctional/parallel/MySQL (26.91s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-056895 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-6pv7s" [2d65ca53-4e55-4a08-97d2-a893c37fcda0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-6pv7s" [2d65ca53-4e55-4a08-97d2-a893c37fcda0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.032324528s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;": exit status 1 (502.177952ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;": exit status 1 (333.481532ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;": exit status 1 (331.768732ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-056895 exec mysql-859648c796-6pv7s -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.91s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16162/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /etc/test/nested/copy/16162/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16162.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /etc/ssl/certs/16162.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16162.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /usr/share/ca-certificates/16162.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/161622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /etc/ssl/certs/161622.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/161622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /usr/share/ca-certificates/161622.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-056895 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh "sudo systemctl is-active docker": exit status 1 (222.085413ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh "sudo systemctl is-active crio": exit status 1 (227.13796ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-056895 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-056895 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-lkc92" [e3d8f8e9-7147-4573-9f94-f62c21aa5602] Pending
helpers_test.go:344: "hello-node-d7447cc7f-lkc92" [e3d8f8e9-7147-4573-9f94-f62c21aa5602] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-lkc92" [e3d8f8e9-7147-4573-9f94-f62c21aa5602] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.006640447s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.18s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "207.198869ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "55.144453ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "213.140322ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "58.558305ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/any-port (9.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdany-port3604892652/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707861929708965355" to /tmp/TestFunctionalparallelMountCmdany-port3604892652/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707861929708965355" to /tmp/TestFunctionalparallelMountCmdany-port3604892652/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707861929708965355" to /tmp/TestFunctionalparallelMountCmdany-port3604892652/001/test-1707861929708965355
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.767563ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 13 22:05 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 13 22:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 13 22:05 test-1707861929708965355
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh cat /mount-9p/test-1707861929708965355
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-056895 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [83d9417a-f105-41e0-9de9-9832aeb5b3a1] Pending
helpers_test.go:344: "busybox-mount" [83d9417a-f105-41e0-9de9-9832aeb5b3a1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [83d9417a-f105-41e0-9de9-9832aeb5b3a1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0213 22:05:37.198695   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [83d9417a-f105-41e0-9de9-9832aeb5b3a1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004591075s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-056895 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdany-port3604892652/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.45s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdspecific-port3914786570/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.505539ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdspecific-port3914786570/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh "sudo umount -f /mount-9p": exit status 1 (233.501375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-056895 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdspecific-port3914786570/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdVerifyCleanup893130726/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdVerifyCleanup893130726/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdVerifyCleanup893130726/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T" /mount1: exit status 1 (320.215333ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-056895 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdVerifyCleanup893130726/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdVerifyCleanup893130726/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-056895 /tmp/TestFunctionalparallelMountCmdVerifyCleanup893130726/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

TestFunctional/parallel/ServiceCmd/List (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 service list -o json
functional_test.go:1490: Took "558.332322ms" to run "out/minikube-linux-amd64 -p functional-056895 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.117:32338
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.117:32338
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 version -o=json --components
2024/02/13 22:06:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-056895 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-056895
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-056895
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-056895 image ls --format short --alsologtostderr:
I0213 22:06:08.227757   24199 out.go:291] Setting OutFile to fd 1 ...
I0213 22:06:08.227877   24199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.227882   24199 out.go:304] Setting ErrFile to fd 2...
I0213 22:06:08.227886   24199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.228078   24199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
I0213 22:06:08.228696   24199 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.228792   24199 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.229191   24199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.229239   24199 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.249375   24199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
I0213 22:06:08.249858   24199 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.250453   24199 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.250476   24199 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.250793   24199 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.251001   24199 main.go:141] libmachine: (functional-056895) Calling .GetState
I0213 22:06:08.253057   24199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.253102   24199 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.268181   24199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
I0213 22:06:08.268662   24199 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.269195   24199 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.269217   24199 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.269560   24199 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.269740   24199 main.go:141] libmachine: (functional-056895) Calling .DriverName
I0213 22:06:08.269962   24199 ssh_runner.go:195] Run: systemctl --version
I0213 22:06:08.269991   24199 main.go:141] libmachine: (functional-056895) Calling .GetSSHHostname
I0213 22:06:08.273115   24199 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.273591   24199 main.go:141] libmachine: (functional-056895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:e1:71", ip: ""} in network mk-functional-056895: {Iface:virbr1 ExpiryTime:2024-02-13 23:03:28 +0000 UTC Type:0 Mac:52:54:00:d9:e1:71 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-056895 Clientid:01:52:54:00:d9:e1:71}
I0213 22:06:08.273617   24199 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined IP address 192.168.39.117 and MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.273773   24199 main.go:141] libmachine: (functional-056895) Calling .GetSSHPort
I0213 22:06:08.273956   24199 main.go:141] libmachine: (functional-056895) Calling .GetSSHKeyPath
I0213 22:06:08.274103   24199 main.go:141] libmachine: (functional-056895) Calling .GetSSHUsername
I0213 22:06:08.274229   24199 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/functional-056895/id_rsa Username:docker}
I0213 22:06:08.359237   24199 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:06:08.414983   24199 main.go:141] libmachine: Making call to close driver server
I0213 22:06:08.415004   24199 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:08.415297   24199 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:08.415325   24199 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:08.415328   24199 main.go:141] libmachine: (functional-056895) DBG | Closing plugin on server side
I0213 22:06:08.415342   24199 main.go:141] libmachine: Making call to close driver server
I0213 22:06:08.415352   24199 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:08.415578   24199 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:08.415593   24199 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:08.415607   24199 main.go:141] libmachine: (functional-056895) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-056895 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-056895  | sha256:95d513 | 1.01kB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| gcr.io/google-containers/addon-resizer      | functional-056895  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:247f7a | 70.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-056895 image ls --format table --alsologtostderr:
I0213 22:06:08.814561   24323 out.go:291] Setting OutFile to fd 1 ...
I0213 22:06:08.814700   24323 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.814711   24323 out.go:304] Setting ErrFile to fd 2...
I0213 22:06:08.814719   24323 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.814917   24323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
I0213 22:06:08.815464   24323 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.815553   24323 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.815909   24323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.815956   24323 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.830765   24323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
I0213 22:06:08.831221   24323 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.831754   24323 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.831791   24323 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.832133   24323 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.832316   24323 main.go:141] libmachine: (functional-056895) Calling .GetState
I0213 22:06:08.834266   24323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.834301   24323 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.848388   24323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
I0213 22:06:08.848777   24323 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.849221   24323 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.849245   24323 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.849565   24323 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.849754   24323 main.go:141] libmachine: (functional-056895) Calling .DriverName
I0213 22:06:08.849968   24323 ssh_runner.go:195] Run: systemctl --version
I0213 22:06:08.850001   24323 main.go:141] libmachine: (functional-056895) Calling .GetSSHHostname
I0213 22:06:08.852297   24323 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.852672   24323 main.go:141] libmachine: (functional-056895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:e1:71", ip: ""} in network mk-functional-056895: {Iface:virbr1 ExpiryTime:2024-02-13 23:03:28 +0000 UTC Type:0 Mac:52:54:00:d9:e1:71 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-056895 Clientid:01:52:54:00:d9:e1:71}
I0213 22:06:08.852700   24323 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined IP address 192.168.39.117 and MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.852854   24323 main.go:141] libmachine: (functional-056895) Calling .GetSSHPort
I0213 22:06:08.853000   24323 main.go:141] libmachine: (functional-056895) Calling .GetSSHKeyPath
I0213 22:06:08.853136   24323 main.go:141] libmachine: (functional-056895) Calling .GetSSHUsername
I0213 22:06:08.853248   24323 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/functional-056895/id_rsa Username:docker}
I0213 22:06:08.947235   24323 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:06:09.017411   24323 main.go:141] libmachine: Making call to close driver server
I0213 22:06:09.017437   24323 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:09.017730   24323 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:09.017750   24323 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:09.017758   24323 main.go:141] libmachine: Making call to close driver server
I0213 22:06:09.017765   24323 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:09.017968   24323 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:09.017986   24323 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:09.017985   24323 main.go:141] libmachine: (functional-056895) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-056895 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:95d513107b6e6cba9daa6ad21c1db53e2f384f0fb6232e5ea7193f74a221bcb6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-056895"],"size":"1007"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a2611033
15f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-056895"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-sched
uler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"
sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05","repoDigests":["docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938"],"repoTags":["docker.io/library/nginx:latest"],"size":"70521558"
},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-056895 image ls --format json --alsologtostderr:
I0213 22:06:08.529395   24258 out.go:291] Setting OutFile to fd 1 ...
I0213 22:06:08.529514   24258 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.529522   24258 out.go:304] Setting ErrFile to fd 2...
I0213 22:06:08.529526   24258 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.529748   24258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
I0213 22:06:08.530326   24258 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.530436   24258 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.530795   24258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.530839   24258 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.546126   24258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
I0213 22:06:08.546490   24258 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.547005   24258 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.547026   24258 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.547363   24258 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.547546   24258 main.go:141] libmachine: (functional-056895) Calling .GetState
I0213 22:06:08.549360   24258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.549403   24258 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.563215   24258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
I0213 22:06:08.563669   24258 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.564190   24258 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.564208   24258 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.564551   24258 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.564782   24258 main.go:141] libmachine: (functional-056895) Calling .DriverName
I0213 22:06:08.565015   24258 ssh_runner.go:195] Run: systemctl --version
I0213 22:06:08.565040   24258 main.go:141] libmachine: (functional-056895) Calling .GetSSHHostname
I0213 22:06:08.568370   24258 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.568797   24258 main.go:141] libmachine: (functional-056895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:e1:71", ip: ""} in network mk-functional-056895: {Iface:virbr1 ExpiryTime:2024-02-13 23:03:28 +0000 UTC Type:0 Mac:52:54:00:d9:e1:71 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-056895 Clientid:01:52:54:00:d9:e1:71}
I0213 22:06:08.568841   24258 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined IP address 192.168.39.117 and MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.569008   24258 main.go:141] libmachine: (functional-056895) Calling .GetSSHPort
I0213 22:06:08.569182   24258 main.go:141] libmachine: (functional-056895) Calling .GetSSHKeyPath
I0213 22:06:08.569343   24258 main.go:141] libmachine: (functional-056895) Calling .GetSSHUsername
I0213 22:06:08.569506   24258 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/functional-056895/id_rsa Username:docker}
I0213 22:06:08.667470   24258 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:06:08.751200   24258 main.go:141] libmachine: Making call to close driver server
I0213 22:06:08.751212   24258 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:08.751482   24258 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:08.751508   24258 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:08.751516   24258 main.go:141] libmachine: (functional-056895) DBG | Closing plugin on server side
I0213 22:06:08.751525   24258 main.go:141] libmachine: Making call to close driver server
I0213 22:06:08.751537   24258 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:08.751773   24258 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:08.751786   24258 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-056895 image ls --format yaml --alsologtostderr:
- id: sha256:95d513107b6e6cba9daa6ad21c1db53e2f384f0fb6232e5ea7193f74a221bcb6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-056895
size: "1007"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-056895
size: "10823156"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05
repoDigests:
- docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938
repoTags:
- docker.io/library/nginx:latest
size: "70521558"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-056895 image ls --format yaml --alsologtostderr:
I0213 22:06:08.283241   24211 out.go:291] Setting OutFile to fd 1 ...
I0213 22:06:08.283396   24211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.283407   24211 out.go:304] Setting ErrFile to fd 2...
I0213 22:06:08.283411   24211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.283613   24211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
I0213 22:06:08.284202   24211 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.284304   24211 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.284695   24211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.284740   24211 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.299305   24211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
I0213 22:06:08.299671   24211 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.300256   24211 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.300292   24211 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.300637   24211 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.300788   24211 main.go:141] libmachine: (functional-056895) Calling .GetState
I0213 22:06:08.302504   24211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.302551   24211 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.318658   24211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
I0213 22:06:08.319111   24211 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.319515   24211 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.319535   24211 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.319896   24211 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.320108   24211 main.go:141] libmachine: (functional-056895) Calling .DriverName
I0213 22:06:08.320367   24211 ssh_runner.go:195] Run: systemctl --version
I0213 22:06:08.320398   24211 main.go:141] libmachine: (functional-056895) Calling .GetSSHHostname
I0213 22:06:08.323038   24211 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.323438   24211 main.go:141] libmachine: (functional-056895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:e1:71", ip: ""} in network mk-functional-056895: {Iface:virbr1 ExpiryTime:2024-02-13 23:03:28 +0000 UTC Type:0 Mac:52:54:00:d9:e1:71 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-056895 Clientid:01:52:54:00:d9:e1:71}
I0213 22:06:08.323477   24211 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined IP address 192.168.39.117 and MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.323514   24211 main.go:141] libmachine: (functional-056895) Calling .GetSSHPort
I0213 22:06:08.323727   24211 main.go:141] libmachine: (functional-056895) Calling .GetSSHKeyPath
I0213 22:06:08.323893   24211 main.go:141] libmachine: (functional-056895) Calling .GetSSHUsername
I0213 22:06:08.324040   24211 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/functional-056895/id_rsa Username:docker}
I0213 22:06:08.414723   24211 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:06:08.467795   24211 main.go:141] libmachine: Making call to close driver server
I0213 22:06:08.467813   24211 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:08.468111   24211 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:08.468150   24211 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:08.468166   24211 main.go:141] libmachine: Making call to close driver server
I0213 22:06:08.468179   24211 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:08.468428   24211 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:08.468446   24211 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-056895 ssh pgrep buildkitd: exit status 1 (220.872923ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image build -t localhost/my-image:functional-056895 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image build -t localhost/my-image:functional-056895 testdata/build --alsologtostderr: (2.703231299s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-056895 image build -t localhost/my-image:functional-056895 testdata/build --alsologtostderr:
I0213 22:06:08.696139   24300 out.go:291] Setting OutFile to fd 1 ...
I0213 22:06:08.696357   24300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.696370   24300 out.go:304] Setting ErrFile to fd 2...
I0213 22:06:08.696376   24300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:06:08.696615   24300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
I0213 22:06:08.697293   24300 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.697813   24300 config.go:182] Loaded profile config "functional-056895": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0213 22:06:08.698239   24300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.698291   24300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.712622   24300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
I0213 22:06:08.713145   24300 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.713761   24300 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.713780   24300 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.714154   24300 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.714403   24300 main.go:141] libmachine: (functional-056895) Calling .GetState
I0213 22:06:08.716495   24300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0213 22:06:08.716539   24300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:06:08.731284   24300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
I0213 22:06:08.731704   24300 main.go:141] libmachine: () Calling .GetVersion
I0213 22:06:08.732248   24300 main.go:141] libmachine: Using API Version  1
I0213 22:06:08.732268   24300 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:06:08.732608   24300 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:06:08.732818   24300 main.go:141] libmachine: (functional-056895) Calling .DriverName
I0213 22:06:08.733022   24300 ssh_runner.go:195] Run: systemctl --version
I0213 22:06:08.733043   24300 main.go:141] libmachine: (functional-056895) Calling .GetSSHHostname
I0213 22:06:08.735938   24300 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.736381   24300 main.go:141] libmachine: (functional-056895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:e1:71", ip: ""} in network mk-functional-056895: {Iface:virbr1 ExpiryTime:2024-02-13 23:03:28 +0000 UTC Type:0 Mac:52:54:00:d9:e1:71 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-056895 Clientid:01:52:54:00:d9:e1:71}
I0213 22:06:08.736406   24300 main.go:141] libmachine: (functional-056895) DBG | domain functional-056895 has defined IP address 192.168.39.117 and MAC address 52:54:00:d9:e1:71 in network mk-functional-056895
I0213 22:06:08.736569   24300 main.go:141] libmachine: (functional-056895) Calling .GetSSHPort
I0213 22:06:08.736720   24300 main.go:141] libmachine: (functional-056895) Calling .GetSSHKeyPath
I0213 22:06:08.736856   24300 main.go:141] libmachine: (functional-056895) Calling .GetSSHUsername
I0213 22:06:08.736974   24300 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/functional-056895/id_rsa Username:docker}
I0213 22:06:08.841107   24300 build_images.go:151] Building image from path: /tmp/build.640770717.tar
I0213 22:06:08.841175   24300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0213 22:06:08.862503   24300 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.640770717.tar
I0213 22:06:08.875389   24300 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.640770717.tar: stat -c "%s %y" /var/lib/minikube/build/build.640770717.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.640770717.tar': No such file or directory
I0213 22:06:08.875425   24300 ssh_runner.go:362] scp /tmp/build.640770717.tar --> /var/lib/minikube/build/build.640770717.tar (3072 bytes)
I0213 22:06:08.910995   24300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.640770717
I0213 22:06:08.920202   24300 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.640770717 -xf /var/lib/minikube/build/build.640770717.tar
I0213 22:06:08.928551   24300 containerd.go:379] Building image: /var/lib/minikube/build/build.640770717
I0213 22:06:08.928607   24300 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.640770717 --local dockerfile=/var/lib/minikube/build/build.640770717 --output type=image,name=localhost/my-image:functional-056895
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:896caa837ecab9e54b88cc3fb8521798117dfbef37ecb49566a7f59ac3065499 0.0s done
#8 exporting config sha256:a6e15b91921f4d730459678f5c89f3458f3fdd378f525523e80bf035d8a86a9c 0.0s done
#8 naming to localhost/my-image:functional-056895 done
#8 DONE 0.2s
I0213 22:06:11.308003   24300 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.640770717 --local dockerfile=/var/lib/minikube/build/build.640770717 --output type=image,name=localhost/my-image:functional-056895: (2.379368555s)
I0213 22:06:11.308073   24300 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.640770717
I0213 22:06:11.327691   24300 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.640770717.tar
I0213 22:06:11.339035   24300 build_images.go:207] Built localhost/my-image:functional-056895 from /tmp/build.640770717.tar
I0213 22:06:11.339088   24300 build_images.go:123] succeeded building to: functional-056895
I0213 22:06:11.339094   24300 build_images.go:124] failed building to: 
I0213 22:06:11.339116   24300 main.go:141] libmachine: Making call to close driver server
I0213 22:06:11.339127   24300 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:11.339424   24300 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:11.339446   24300 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:06:11.339455   24300 main.go:141] libmachine: (functional-056895) DBG | Closing plugin on server side
I0213 22:06:11.339457   24300 main.go:141] libmachine: Making call to close driver server
I0213 22:06:11.339476   24300 main.go:141] libmachine: (functional-056895) Calling .Close
I0213 22:06:11.339689   24300 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:06:11.339706   24300 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.15s)

TestFunctional/parallel/ImageCommands/Setup (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-056895
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.90s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image load --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image load --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr: (5.127767658s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image load --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image load --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr: (2.840932473s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.07s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-056895
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image load --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image load --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr: (5.042950629s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image save gcr.io/google-containers/addon-resizer:functional-056895 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image save gcr.io/google-containers/addon-resizer:functional-056895 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.307011025s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image rm gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.523220826s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-056895
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-056895 image save --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-056895 image save --daemon gcr.io/google-containers/addon-resizer:functional-056895 --alsologtostderr: (1.801183867s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-056895
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.84s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-056895
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-056895
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-056895
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (106.27s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-598836 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0213 22:06:59.119143   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-598836 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m46.268019538s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (106.27s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons enable ingress --alsologtostderr -v=5: (11.477230025s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.48s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-598836 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-598836 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.323189606s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-598836 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-598836 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9c16a8d8-8fd5-44c4-9dd3-6ed3fb6e2876] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9c16a8d8-8fd5-44c4-9dd3-6ed3fb6e2876] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003933344s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-598836 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-598836 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-598836 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.45
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons disable ingress-dns --alsologtostderr -v=1: (2.671923811s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-598836 addons disable ingress --alsologtostderr -v=1: (7.537889458s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.73s)

TestJSONOutput/start/Command (68.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-958535 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0213 22:09:15.275128   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:09:42.959675   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-958535 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m8.307864031s)
--- PASS: TestJSONOutput/start/Command (68.31s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-958535 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-958535 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-958535 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-958535 --output=json --user=testUser: (7.107370658s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-431553 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-431553 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.42622ms)

-- stdout --
	{"specversion":"1.0","id":"0bb8050d-5070-4e34-9edf-3a5f5835cdc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-431553] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad3b5569-83b0-4cc7-bde8-87a68f987e4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18171"}}
	{"specversion":"1.0","id":"620928cb-d0db-496e-806d-e4f8f0f45d48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"077e90be-2f94-4d09-9a43-10261de4cb6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig"}}
	{"specversion":"1.0","id":"85a4e152-ff6f-4cda-844f-56c65e02abf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube"}}
	{"specversion":"1.0","id":"f5f4a9dc-9216-4107-8991-3309815b9db6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2166ebb3-ee30-4f99-97e0-ca077a9df347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a07d9ac3-c147-4f33-b72c-2aab0bc19854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-431553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-431553
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (98.94s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-842207 --driver=kvm2  --container-runtime=containerd
E0213 22:10:27.932849   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:27.938110   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:27.948407   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:27.968736   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:28.009067   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:28.089486   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:28.249932   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:28.570553   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:29.211554   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:30.492055   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:33.052524   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:38.173410   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:10:48.414216   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-842207 --driver=kvm2  --container-runtime=containerd: (48.180654727s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-845570 --driver=kvm2  --container-runtime=containerd
E0213 22:11:08.894428   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-845570 --driver=kvm2  --container-runtime=containerd: (48.132592989s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-842207
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-845570
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-845570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-845570
helpers_test.go:175: Cleaning up "first-842207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-842207
--- PASS: TestMinikubeProfile (98.94s)

TestMountStart/serial/StartWithMountFirst (29.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-448958 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0213 22:11:49.854942   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-448958 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.032932754s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.03s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-448958 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-448958 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (28.98s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-462466 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-462466 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.981376077s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.98s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462466 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462466 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-448958 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462466 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462466 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.5s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-462466
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-462466: (1.501695085s)
--- PASS: TestMountStart/serial/Stop (1.50s)

TestMountStart/serial/RestartStopped (23.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-462466
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-462466: (22.576964244s)
--- PASS: TestMountStart/serial/RestartStopped (23.58s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462466 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462466 ssh -- mount | grep 9p
E0213 22:13:11.775418   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (110.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-108441 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0213 22:13:13.398582   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:13.403850   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:13.414114   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:13.434422   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:13.474724   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:13.555066   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:13.715517   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:14.036080   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:14.676656   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:15.957243   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:18.518339   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:23.638819   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:33.879184   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:13:54.359460   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:14:15.274720   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:14:35.320298   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-108441 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m50.453772633s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.88s)

TestMultiNode/serial/DeployApp2Nodes (4.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-108441 -- rollout status deployment/busybox: (2.571694845s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-fzbqz -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-kvfph -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-fzbqz -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-kvfph -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-fzbqz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-kvfph -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.31s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-fzbqz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-fzbqz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-kvfph -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-108441 -- exec busybox-5b5d89c9d6-kvfph -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
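The `nslookup | awk | cut` pipeline the test runs inside each pod extracts the resolved IP of `host.minikube.internal` from busybox-style `nslookup` output. A minimal local sketch of that extraction, using a canned sample (the sample text and the `10.96.0.10` / `192.168.39.1` addresses are illustrative stand-ins, not captured from a live pod):

```shell
# Canned stand-in for busybox `nslookup host.minikube.internal` output
# (illustrative only; real output depends on the resolver and image).
sample_nslookup() {
cat <<'EOF'
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1
EOF
}

# Same extraction the test performs: keep only line 5 of the output,
# then take the third space-separated field, i.e. the resolved host IP.
host_ip=$(sample_nslookup | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted address is what the test then feeds to `ping -c 1` to confirm the pod can reach the host.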

                                                
                                    
TestMultiNode/serial/AddNode (43.68s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-108441 -v 3 --alsologtostderr
E0213 22:15:27.933587   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-108441 -v 3 --alsologtostderr: (43.100659244s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.68s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-108441 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.6s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp testdata/cp-test.txt multinode-108441:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1805915647/001/cp-test_multinode-108441.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441:/home/docker/cp-test.txt multinode-108441-m02:/home/docker/cp-test_multinode-108441_multinode-108441-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m02 "sudo cat /home/docker/cp-test_multinode-108441_multinode-108441-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441:/home/docker/cp-test.txt multinode-108441-m03:/home/docker/cp-test_multinode-108441_multinode-108441-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m03 "sudo cat /home/docker/cp-test_multinode-108441_multinode-108441-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp testdata/cp-test.txt multinode-108441-m02:/home/docker/cp-test.txt
E0213 22:15:55.616210   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1805915647/001/cp-test_multinode-108441-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441-m02:/home/docker/cp-test.txt multinode-108441:/home/docker/cp-test_multinode-108441-m02_multinode-108441.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441 "sudo cat /home/docker/cp-test_multinode-108441-m02_multinode-108441.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441-m02:/home/docker/cp-test.txt multinode-108441-m03:/home/docker/cp-test_multinode-108441-m02_multinode-108441-m03.txt
E0213 22:15:57.240750   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m03 "sudo cat /home/docker/cp-test_multinode-108441-m02_multinode-108441-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp testdata/cp-test.txt multinode-108441-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1805915647/001/cp-test_multinode-108441-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441-m03:/home/docker/cp-test.txt multinode-108441:/home/docker/cp-test_multinode-108441-m03_multinode-108441.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441 "sudo cat /home/docker/cp-test_multinode-108441-m03_multinode-108441.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 cp multinode-108441-m03:/home/docker/cp-test.txt multinode-108441-m02:/home/docker/cp-test_multinode-108441-m03_multinode-108441-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 ssh -n multinode-108441-m02 "sudo cat /home/docker/cp-test_multinode-108441-m03_multinode-108441-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.60s)
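Each CopyFile step above is a round-trip check: `minikube cp` a file onto a node, then `minikube ssh ... sudo cat` it back and compare. A local sketch of the same pattern, with plain `cp`/`cat` standing in for the minikube commands (the directory layout and file contents here are made up for illustration):

```shell
# Stand-ins for the test's source tree and a node's filesystem.
workdir=$(mktemp -d)
mkdir -p "$workdir/testdata" "$workdir/node"
printf 'Test file for checking file cp command\n' > "$workdir/testdata/cp-test.txt"

# Stand-in for: minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$workdir/testdata/cp-test.txt" "$workdir/node/cp-test.txt"

# Stand-in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
roundtrip=$(cat "$workdir/node/cp-test.txt")
original=$(cat "$workdir/testdata/cp-test.txt")

# The check passes only if the content survived the round trip unchanged.
[ "$roundtrip" = "$original" ] && echo "round-trip OK"
rm -rf "$workdir"
```

The real test repeats this for every ordered pair of nodes, which is why the log shows one `cp` plus two `ssh ... cat` verifications per copy.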

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-108441 node stop m03: (1.38712134s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-108441 status: exit status 7 (439.770074ms)
-- stdout --
	multinode-108441
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-108441-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-108441-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr: exit status 7 (431.726839ms)
-- stdout --
	multinode-108441
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-108441-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-108441-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0213 22:16:02.145997   30699 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:16:02.146120   30699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:16:02.146129   30699 out.go:304] Setting ErrFile to fd 2...
	I0213 22:16:02.146134   30699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:16:02.146303   30699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 22:16:02.146469   30699 out.go:298] Setting JSON to false
	I0213 22:16:02.146496   30699 mustload.go:65] Loading cluster: multinode-108441
	I0213 22:16:02.146530   30699 notify.go:220] Checking for updates...
	I0213 22:16:02.146891   30699 config.go:182] Loaded profile config "multinode-108441": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:16:02.146903   30699 status.go:255] checking status of multinode-108441 ...
	I0213 22:16:02.147268   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.147335   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.173291   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I0213 22:16:02.173727   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.174357   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.174387   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.174706   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.174924   30699 main.go:141] libmachine: (multinode-108441) Calling .GetState
	I0213 22:16:02.176619   30699 status.go:330] multinode-108441 host status = "Running" (err=<nil>)
	I0213 22:16:02.176636   30699 host.go:66] Checking if "multinode-108441" exists ...
	I0213 22:16:02.176963   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.177011   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.193211   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0213 22:16:02.193630   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.194048   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.194069   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.194376   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.194557   30699 main.go:141] libmachine: (multinode-108441) Calling .GetIP
	I0213 22:16:02.197233   30699 main.go:141] libmachine: (multinode-108441) DBG | domain multinode-108441 has defined MAC address 52:54:00:56:9d:a1 in network mk-multinode-108441
	I0213 22:16:02.197602   30699 main.go:141] libmachine: (multinode-108441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:9d:a1", ip: ""} in network mk-multinode-108441: {Iface:virbr1 ExpiryTime:2024-02-13 23:13:28 +0000 UTC Type:0 Mac:52:54:00:56:9d:a1 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-108441 Clientid:01:52:54:00:56:9d:a1}
	I0213 22:16:02.197633   30699 main.go:141] libmachine: (multinode-108441) DBG | domain multinode-108441 has defined IP address 192.168.39.9 and MAC address 52:54:00:56:9d:a1 in network mk-multinode-108441
	I0213 22:16:02.197758   30699 host.go:66] Checking if "multinode-108441" exists ...
	I0213 22:16:02.198176   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.198247   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.213064   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I0213 22:16:02.213574   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.214032   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.214051   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.214329   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.214506   30699 main.go:141] libmachine: (multinode-108441) Calling .DriverName
	I0213 22:16:02.214687   30699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 22:16:02.214722   30699 main.go:141] libmachine: (multinode-108441) Calling .GetSSHHostname
	I0213 22:16:02.217418   30699 main.go:141] libmachine: (multinode-108441) DBG | domain multinode-108441 has defined MAC address 52:54:00:56:9d:a1 in network mk-multinode-108441
	I0213 22:16:02.217839   30699 main.go:141] libmachine: (multinode-108441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:9d:a1", ip: ""} in network mk-multinode-108441: {Iface:virbr1 ExpiryTime:2024-02-13 23:13:28 +0000 UTC Type:0 Mac:52:54:00:56:9d:a1 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-108441 Clientid:01:52:54:00:56:9d:a1}
	I0213 22:16:02.217861   30699 main.go:141] libmachine: (multinode-108441) DBG | domain multinode-108441 has defined IP address 192.168.39.9 and MAC address 52:54:00:56:9d:a1 in network mk-multinode-108441
	I0213 22:16:02.218018   30699 main.go:141] libmachine: (multinode-108441) Calling .GetSSHPort
	I0213 22:16:02.218272   30699 main.go:141] libmachine: (multinode-108441) Calling .GetSSHKeyPath
	I0213 22:16:02.218421   30699 main.go:141] libmachine: (multinode-108441) Calling .GetSSHUsername
	I0213 22:16:02.218609   30699 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/multinode-108441/id_rsa Username:docker}
	I0213 22:16:02.303712   30699 ssh_runner.go:195] Run: systemctl --version
	I0213 22:16:02.309342   30699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:16:02.322709   30699 kubeconfig.go:92] found "multinode-108441" server: "https://192.168.39.9:8443"
	I0213 22:16:02.322737   30699 api_server.go:166] Checking apiserver status ...
	I0213 22:16:02.322769   30699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:16:02.333222   30699 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup
	I0213 22:16:02.340844   30699 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod5a6f6da2fcd0f1d1e194c078e0aa3354/b815f7401eb433f437d8ab8a0b105aac3be9ff37f0aaacb4ffa34f4cce88d756"
	I0213 22:16:02.340904   30699 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod5a6f6da2fcd0f1d1e194c078e0aa3354/b815f7401eb433f437d8ab8a0b105aac3be9ff37f0aaacb4ffa34f4cce88d756/freezer.state
	I0213 22:16:02.348896   30699 api_server.go:204] freezer state: "THAWED"
	I0213 22:16:02.348913   30699 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I0213 22:16:02.354087   30699 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I0213 22:16:02.354107   30699 status.go:421] multinode-108441 apiserver status = Running (err=<nil>)
	I0213 22:16:02.354116   30699 status.go:257] multinode-108441 status: &{Name:multinode-108441 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 22:16:02.354132   30699 status.go:255] checking status of multinode-108441-m02 ...
	I0213 22:16:02.354472   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.354495   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.369054   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I0213 22:16:02.369451   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.369922   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.369947   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.370283   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.370461   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .GetState
	I0213 22:16:02.371981   30699 status.go:330] multinode-108441-m02 host status = "Running" (err=<nil>)
	I0213 22:16:02.372006   30699 host.go:66] Checking if "multinode-108441-m02" exists ...
	I0213 22:16:02.372265   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.372285   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.386751   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0213 22:16:02.387123   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.387561   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.387588   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.387948   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.388146   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .GetIP
	I0213 22:16:02.390831   30699 main.go:141] libmachine: (multinode-108441-m02) DBG | domain multinode-108441-m02 has defined MAC address 52:54:00:83:3c:6e in network mk-multinode-108441
	I0213 22:16:02.391218   30699 main.go:141] libmachine: (multinode-108441-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3c:6e", ip: ""} in network mk-multinode-108441: {Iface:virbr1 ExpiryTime:2024-02-13 23:14:34 +0000 UTC Type:0 Mac:52:54:00:83:3c:6e Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-108441-m02 Clientid:01:52:54:00:83:3c:6e}
	I0213 22:16:02.391246   30699 main.go:141] libmachine: (multinode-108441-m02) DBG | domain multinode-108441-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:83:3c:6e in network mk-multinode-108441
	I0213 22:16:02.391378   30699 host.go:66] Checking if "multinode-108441-m02" exists ...
	I0213 22:16:02.391682   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.391743   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.405753   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0213 22:16:02.406176   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.406674   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.406696   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.406968   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.407166   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .DriverName
	I0213 22:16:02.407353   30699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 22:16:02.407390   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .GetSSHHostname
	I0213 22:16:02.410333   30699 main.go:141] libmachine: (multinode-108441-m02) DBG | domain multinode-108441-m02 has defined MAC address 52:54:00:83:3c:6e in network mk-multinode-108441
	I0213 22:16:02.410756   30699 main.go:141] libmachine: (multinode-108441-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3c:6e", ip: ""} in network mk-multinode-108441: {Iface:virbr1 ExpiryTime:2024-02-13 23:14:34 +0000 UTC Type:0 Mac:52:54:00:83:3c:6e Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-108441-m02 Clientid:01:52:54:00:83:3c:6e}
	I0213 22:16:02.410793   30699 main.go:141] libmachine: (multinode-108441-m02) DBG | domain multinode-108441-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:83:3c:6e in network mk-multinode-108441
	I0213 22:16:02.410915   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .GetSSHPort
	I0213 22:16:02.411087   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .GetSSHKeyPath
	I0213 22:16:02.411300   30699 main.go:141] libmachine: (multinode-108441-m02) Calling .GetSSHUsername
	I0213 22:16:02.411465   30699 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8975/.minikube/machines/multinode-108441-m02/id_rsa Username:docker}
	I0213 22:16:02.491868   30699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:16:02.504316   30699 status.go:257] multinode-108441-m02 status: &{Name:multinode-108441-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0213 22:16:02.504366   30699 status.go:255] checking status of multinode-108441-m03 ...
	I0213 22:16:02.504690   30699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:16:02.504719   30699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:16:02.519425   30699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0213 22:16:02.519810   30699 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:16:02.520345   30699 main.go:141] libmachine: Using API Version  1
	I0213 22:16:02.520373   30699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:16:02.520667   30699 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:16:02.520797   30699 main.go:141] libmachine: (multinode-108441-m03) Calling .GetState
	I0213 22:16:02.522680   30699 status.go:330] multinode-108441-m03 host status = "Stopped" (err=<nil>)
	I0213 22:16:02.522695   30699 status.go:343] host is not running, skipping remaining checks
	I0213 22:16:02.522708   30699 status.go:257] multinode-108441-m03 status: &{Name:multinode-108441-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
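The `Non-zero exit ... exit status 7` above is expected: `minikube status` exits non-zero when any host in the profile is stopped, and the test asserts on that code rather than treating it as a failure. A tiny sketch of capturing such an exit code without aborting the calling script (the stub function is hypothetical; the code 7 mirrors what this run reports):

```shell
# Hypothetical stub mimicking `minikube status` with one stopped node:
# it prints a status report and returns a non-zero code (7 in this run).
fake_minikube_status() {
  printf 'multinode-108441-m03\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n'
  return 7
}

# Capture both the output and the exit code; the `|| rc=$?` keeps
# `set -e` scripts from aborting on the expected non-zero exit.
rc=0
out=$(fake_minikube_status) || rc=$?
echo "exit code: $rc"
```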

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.69s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-108441 node start m03 --alsologtostderr: (27.059347019s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.69s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (312.39s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-108441
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-108441
E0213 22:18:13.397746   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:18:41.081177   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:19:15.275016   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-108441: (3m4.673516534s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-108441 --wait=true -v=8 --alsologtostderr
E0213 22:20:27.933239   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:20:38.320452   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-108441 --wait=true -v=8 --alsologtostderr: (2m7.597670781s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-108441
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.39s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.79s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-108441 node delete m03: (1.221968703s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.79s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.66s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 stop
E0213 22:23:13.397722   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:24:15.275154   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-108441 stop: (3m3.46808337s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-108441 status: exit status 7 (100.851005ms)
-- stdout --
	multinode-108441
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-108441-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr: exit status 7 (92.923078ms)
-- stdout --
	multinode-108441
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-108441-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0213 22:24:48.016948   33242 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:24:48.017179   33242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:24:48.017186   33242 out.go:304] Setting ErrFile to fd 2...
	I0213 22:24:48.017191   33242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:24:48.017390   33242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 22:24:48.017561   33242 out.go:298] Setting JSON to false
	I0213 22:24:48.017590   33242 mustload.go:65] Loading cluster: multinode-108441
	I0213 22:24:48.017620   33242 notify.go:220] Checking for updates...
	I0213 22:24:48.017971   33242 config.go:182] Loaded profile config "multinode-108441": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:24:48.017984   33242 status.go:255] checking status of multinode-108441 ...
	I0213 22:24:48.018356   33242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:24:48.018431   33242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:24:48.035686   33242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I0213 22:24:48.036204   33242 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:24:48.036813   33242 main.go:141] libmachine: Using API Version  1
	I0213 22:24:48.036840   33242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:24:48.037194   33242 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:24:48.037401   33242 main.go:141] libmachine: (multinode-108441) Calling .GetState
	I0213 22:24:48.039022   33242 status.go:330] multinode-108441 host status = "Stopped" (err=<nil>)
	I0213 22:24:48.039043   33242 status.go:343] host is not running, skipping remaining checks
	I0213 22:24:48.039052   33242 status.go:257] multinode-108441 status: &{Name:multinode-108441 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 22:24:48.039090   33242 status.go:255] checking status of multinode-108441-m02 ...
	I0213 22:24:48.039495   33242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0213 22:24:48.039541   33242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:24:48.053965   33242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0213 22:24:48.054308   33242 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:24:48.054761   33242 main.go:141] libmachine: Using API Version  1
	I0213 22:24:48.054788   33242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:24:48.055077   33242 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:24:48.055237   33242 main.go:141] libmachine: (multinode-108441-m02) Calling .GetState
	I0213 22:24:48.056775   33242 status.go:330] multinode-108441-m02 host status = "Stopped" (err=<nil>)
	I0213 22:24:48.056787   33242 status.go:343] host is not running, skipping remaining checks
	I0213 22:24:48.056792   33242 status.go:257] multinode-108441-m02 status: &{Name:multinode-108441-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.66s)

TestMultiNode/serial/RestartMultiNode (88.95s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-108441 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0213 22:25:27.933140   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-108441 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m28.419557506s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-108441 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.95s)

TestMultiNode/serial/ValidateNameConflict (47.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-108441
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-108441-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-108441-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (77.921993ms)

-- stdout --
	* [multinode-108441-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-108441-m02' is duplicated with machine name 'multinode-108441-m02' in profile 'multinode-108441'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-108441-m03 --driver=kvm2  --container-runtime=containerd
E0213 22:26:50.976493   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-108441-m03 --driver=kvm2  --container-runtime=containerd: (46.381794756s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-108441
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-108441: exit status 80 (234.865483ms)

-- stdout --
	* Adding node m03 to cluster multinode-108441
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-108441-m03 already exists in multinode-108441-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-108441-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-108441-m03: (1.009414363s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.76s)

TestPreload (238.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-595132 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0213 22:28:13.397768   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-595132 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m21.379763089s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-595132 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-595132
E0213 22:29:15.275505   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:29:36.441739   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-595132: (1m31.502454413s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-595132 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0213 22:30:27.933339   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-595132 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m3.711441015s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-595132 image list
helpers_test.go:175: Cleaning up "test-preload-595132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-595132
--- PASS: TestPreload (238.42s)

TestScheduledStopUnix (119.52s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-001663 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-001663 --memory=2048 --driver=kvm2  --container-runtime=containerd: (47.73881077s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-001663 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-001663 -n scheduled-stop-001663
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-001663 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-001663 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-001663 -n scheduled-stop-001663
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-001663
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-001663 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-001663
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-001663: exit status 7 (74.849371ms)

-- stdout --
	scheduled-stop-001663
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-001663 -n scheduled-stop-001663
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-001663 -n scheduled-stop-001663: exit status 7 (74.920075ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-001663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-001663
--- PASS: TestScheduledStopUnix (119.52s)

TestRunningBinaryUpgrade (160.06s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1079945341 start -p running-upgrade-349710 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0213 22:35:27.933590   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1079945341 start -p running-upgrade-349710 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m22.646592163s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-349710 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0213 22:37:18.321308   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-349710 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.663799398s)
helpers_test.go:175: Cleaning up "running-upgrade-349710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-349710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-349710: (1.359686952s)
--- PASS: TestRunningBinaryUpgrade (160.06s)

TestKubernetesUpgrade (177.82s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m33.90642032s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-687525
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-687525: (5.118426824s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-687525 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-687525 status --format={{.Host}}: exit status 7 (99.065643ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (41.240489814s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-687525 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (99.216139ms)

-- stdout --
	* [kubernetes-upgrade-687525] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-687525
	    minikube start -p kubernetes-upgrade-687525 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6875252 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-687525 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-687525 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (35.47227493s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-687525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-687525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-687525: (1.817560116s)
--- PASS: TestKubernetesUpgrade (177.82s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-700068 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-700068 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (95.504655ms)

-- stdout --
	* [NoKubernetes-700068] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (107.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-700068 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-700068 --driver=kvm2  --container-runtime=containerd: (1m46.999074416s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-700068 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (107.29s)

TestNetworkPlugins/group/false (3.32s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-452084 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-452084 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (115.641962ms)

-- stdout --
	* [false-452084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0213 22:33:07.832661   37103 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:33:07.832802   37103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:33:07.832812   37103 out.go:304] Setting ErrFile to fd 2...
	I0213 22:33:07.832817   37103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:33:07.833038   37103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8975/.minikube/bin
	I0213 22:33:07.833605   37103 out.go:298] Setting JSON to false
	I0213 22:33:07.834487   37103 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":4535,"bootTime":1707859053,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:33:07.834545   37103 start.go:138] virtualization: kvm guest
	I0213 22:33:07.836982   37103 out.go:177] * [false-452084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 22:33:07.838498   37103 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:33:07.838524   37103 notify.go:220] Checking for updates...
	I0213 22:33:07.839913   37103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:33:07.841321   37103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8975/kubeconfig
	I0213 22:33:07.842661   37103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8975/.minikube
	I0213 22:33:07.844022   37103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:33:07.845394   37103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:33:07.847081   37103 config.go:182] Loaded profile config "NoKubernetes-700068": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:33:07.847173   37103 config.go:182] Loaded profile config "force-systemd-env-706846": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:33:07.847256   37103 config.go:182] Loaded profile config "offline-containerd-654013": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0213 22:33:07.847355   37103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:33:07.887595   37103 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 22:33:07.888989   37103 start.go:298] selected driver: kvm2
	I0213 22:33:07.889000   37103 start.go:902] validating driver "kvm2" against <nil>
	I0213 22:33:07.889009   37103 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:33:07.891052   37103 out.go:177] 
	W0213 22:33:07.892457   37103 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0213 22:33:07.893972   37103 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-452084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-452084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-452084

>>> host: /etc/nsswitch.conf:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"

>>> host: /etc/hosts:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"

>>> host: /etc/resolv.conf:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-452084

>>> host: crictl pods:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"

>>> host: crictl containers:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"

>>> k8s: describe netcat deployment:
error: context "false-452084" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-452084" does not exist

>>> k8s: netcat logs:
error: context "false-452084" does not exist

>>> k8s: describe coredns deployment:
error: context "false-452084" does not exist

>>> k8s: describe coredns pods:
error: context "false-452084" does not exist
>>> k8s: coredns logs:
error: context "false-452084" does not exist
>>> k8s: describe api server pod(s):
error: context "false-452084" does not exist
>>> k8s: api server logs:
error: context "false-452084" does not exist
>>> host: /etc/cni:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: ip a s:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: ip r s:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: iptables-save:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: iptables table nat:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> k8s: describe kube-proxy daemon set:
error: context "false-452084" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-452084" does not exist
>>> k8s: kube-proxy logs:
error: context "false-452084" does not exist
>>> host: kubelet daemon status:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: kubelet daemon config:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> k8s: kubelet logs:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-452084
>>> host: docker daemon status:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: docker daemon config:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /etc/docker/daemon.json:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: docker system info:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: cri-docker daemon status:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: cri-docker daemon config:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: cri-dockerd version:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: containerd daemon status:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: containerd daemon config:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /etc/containerd/config.toml:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: containerd config dump:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: crio daemon status:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: crio daemon config:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: /etc/crio:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
>>> host: crio config:
* Profile "false-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452084"
----------------------- debugLogs end: false-452084 [took: 3.054028431s] --------------------------------
helpers_test.go:175: Cleaning up "false-452084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-452084
--- PASS: TestNetworkPlugins/group/false (3.32s)

TestNoKubernetes/serial/StartWithStopK8s (82.17s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-700068 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-700068 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m20.842304956s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-700068 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-700068 status -o json: exit status 2 (257.734107ms)
-- stdout --
	{"Name":"NoKubernetes-700068","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-700068
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-700068: (1.068127271s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (82.17s)

TestNoKubernetes/serial/Start (31.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-700068 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-700068 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.958605088s)
--- PASS: TestNoKubernetes/serial/Start (31.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-700068 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-700068 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.465819ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
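For reference, the VerifyK8sNotRunning assertion above hinges on `systemctl is-active` exit semantics: with `--quiet` it prints nothing and exits 0 only when the unit is active, so any non-zero status (3 for "inactive", surfaced here as "Process exited with status 3" over SSH) confirms kubelet is stopped. A minimal sketch of that check; the helper name is illustrative and not part of the test suite:

```shell
# Return 0 (and print "stopped") when the given systemd unit is NOT active.
# `systemctl is-active --quiet` exits 0 only for an active unit; any other
# status (e.g. 3 for "inactive") means the service is not running, so the
# exit code alone is enough for the check.
check_unit_stopped() {
    if systemctl is-active --quiet "$1"; then
        echo "active"
        return 1    # unit is running: the "stopped" check fails
    else
        echo "stopped"
        return 0
    fi
}
```

On a host without systemd the `systemctl` invocation itself fails (also non-zero), so the helper still reports "stopped"; a stricter version would distinguish "command not found" from "inactive" by inspecting the exact exit code.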

TestNoKubernetes/serial/ProfileList (1.35s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.35s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-700068
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-700068: (1.3400261s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (41.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-700068 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-700068 --driver=kvm2  --container-runtime=containerd: (41.100034623s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-700068 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-700068 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.455975ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (149.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1989136790 start -p stopped-upgrade-938920 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1989136790 start -p stopped-upgrade-938920 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (59.785356998s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1989136790 -p stopped-upgrade-938920 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1989136790 -p stopped-upgrade-938920 stop: (2.146669019s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-938920 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-938920 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m27.213164148s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.15s)

TestPause/serial/Start (114.35s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-119858 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0213 22:38:13.398144   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-119858 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m54.351519459s)
--- PASS: TestPause/serial/Start (114.35s)

TestNetworkPlugins/group/auto/Start (103.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m43.279407295s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.28s)

TestNetworkPlugins/group/kindnet/Start (95.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m35.750252725s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (95.75s)

TestPause/serial/SecondStartNoReconfiguration (11.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-119858 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-119858 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (11.109596738s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (11.13s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-938920
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestNetworkPlugins/group/calico/Start (96.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m36.814054272s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.81s)


TestPause/serial/Pause (1.09s)

pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-119858 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-119858 --alsologtostderr -v=5: (1.089004204s)
--- PASS: TestPause/serial/Pause (1.09s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-119858 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-119858 --output=json --layout=cluster: exit status 2 (346.102034ms)
-- stdout --
	{"Name":"pause-119858","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-119858","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)

TestPause/serial/Unpause (0.86s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-119858 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-119858 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (1.32s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-119858 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-119858 --alsologtostderr -v=5: (1.322297827s)
--- PASS: TestPause/serial/DeletePaused (1.32s)

TestPause/serial/VerifyDeletedResources (0.54s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

TestNetworkPlugins/group/custom-flannel/Start (97.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0213 22:40:27.932790   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m37.018137527s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (97.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pzwfz" [c490ccc9-0746-49af-932b-5480f803a4ab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005145072s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fqxv9" [356f2033-0949-4e02-b998-2f1efb17b3e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fqxv9" [356f2033-0949-4e02-b998-2f1efb17b3e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.006365734s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-996x5" [b85aadf3-4f2c-4ef5-82ef-bd889a5d0d18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-996x5" [b85aadf3-4f2c-4ef5-82ef-bd889a5d0d18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006447319s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.29s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/auto/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (71.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m11.633055833s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.63s)

TestNetworkPlugins/group/flannel/Start (121.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m1.814834358s)
--- PASS: TestNetworkPlugins/group/flannel/Start (121.81s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-t2cq4" [dc47f372-41e0-40b9-9ccf-5c154c9df036] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006023917s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9gn5p" [0489b2f5-1b7e-4ea1-8570-888e00203e31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9gn5p" [0489b2f5-1b7e-4ea1-8570-888e00203e31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004871306s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rt5jh" [d25ad9a7-7d95-4d1c-a276-b332c3b72edb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rt5jh" [d25ad9a7-7d95-4d1c-a276-b332c3b72edb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00770487s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (88.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-452084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m28.491211221s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.49s)

TestStartStop/group/old-k8s-version/serial/FirstStart (180.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-731075 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-731075 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (3m0.196128774s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (180.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ktg5c" [8c9d7af1-647c-4c44-8722-c6855655e6d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ktg5c" [8c9d7af1-647c-4c44-8722-c6855655e6d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004909973s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (103.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-660431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0213 22:43:13.397680   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:43:30.977272   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-660431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m43.864863298s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.86s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gcps5" [fd52b84f-78ae-4539-872d-93b73a86835a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008020612s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (10.51s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7rqx8" [f9cdb912-5c4c-4eb6-86f9-521794107698] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7rqx8" [f9cdb912-5c4c-4eb6-86f9-521794107698] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005691443s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.51s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-452084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-452084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v5q4q" [30c853ab-97a9-4585-ac2e-53a8883423af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v5q4q" [30c853ab-97a9-4585-ac2e-53a8883423af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007039863s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-452084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-452084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)
E0213 22:50:24.152985   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.158261   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.168846   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.189549   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.230151   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.311069   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.471950   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:24.792771   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:25.023184   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:50:25.432957   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:26.713727   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:27.933087   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
E0213 22:50:29.274473   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:34.394865   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:44.635790   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:50:52.031669   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:50:59.118952   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:51:05.116248   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:51:15.977650   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:51:19.715576   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:51:26.801726   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:51:28.517517   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:51:39.972271   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:51:46.076960   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/old-k8s-version-731075/client.crt: no such file or directory
E0213 22:51:54.590571   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:52:07.656618   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:52:22.276167   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-050609 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-050609 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m45.305921357s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.31s)

TestStartStop/group/newest-cni/serial/FirstStart (84.46s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-446707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0213 22:44:15.274831   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-446707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m24.464049012s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (84.46s)

TestStartStop/group/no-preload/serial/DeployApp (7.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-660431 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6bd36272-21c0-45d9-a37d-7df3025ed581] Pending
helpers_test.go:344: "busybox" [6bd36272-21c0-45d9-a37d-7df3025ed581] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6bd36272-21c0-45d9-a37d-7df3025ed581] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.005400673s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-660431 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-660431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-660431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097295574s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-660431 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (91.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-660431 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-660431 --alsologtostderr -v=3: (1m31.9215899s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-731075 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3303d741-1539-4175-859b-5806d9bd1684] Pending
helpers_test.go:344: "busybox" [3303d741-1539-4175-859b-5806d9bd1684] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3303d741-1539-4175-859b-5806d9bd1684] Running
E0213 22:45:27.933526   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/functional-056895/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004074337s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-731075 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-731075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-731075 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/old-k8s-version/serial/Stop (91.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-731075 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-731075 --alsologtostderr -v=3: (1m31.989527204s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.99s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-446707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-446707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111825986s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (2.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-446707 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-446707 --alsologtostderr -v=3: (2.121321557s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-446707 -n newest-cni-446707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-446707 -n newest-cni-446707: exit status 7 (81.886394ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-446707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (45.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-446707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0213 22:45:52.031645   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:52.036955   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:52.047230   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:52.067535   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:52.107911   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:52.188673   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:52.348839   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-446707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (45.184337404s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-446707 -n newest-cni-446707
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.48s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-050609 create -f testdata/busybox.yaml
E0213 22:45:52.669851   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cb8a13b4-8ee9-4abb-960e-0c2443d4fdb2] Pending
E0213 22:45:53.310083   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cb8a13b4-8ee9-4abb-960e-0c2443d4fdb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0213 22:45:54.590382   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cb8a13b4-8ee9-4abb-960e-0c2443d4fdb2] Running
E0213 22:45:57.151367   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:45:59.118246   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.123576   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.133872   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.154199   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.194543   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.274898   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.435700   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:45:59.756315   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:46:00.396635   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004617032s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-050609 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-050609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0213 22:46:01.677403   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-050609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.152533156s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-050609 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-050609 --alsologtostderr -v=3
E0213 22:46:02.271594   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:46:04.237613   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:46:09.358391   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:46:12.512225   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:46:16.442858   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:46:19.599538   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-050609 --alsologtostderr -v=3: (1m32.275383911s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.28s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-446707 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-446707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-446707 -n newest-cni-446707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-446707 -n newest-cni-446707: exit status 2 (259.800249ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-446707 -n newest-cni-446707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-446707 -n newest-cni-446707: exit status 2 (247.584159ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-446707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-446707 -n newest-cni-446707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-446707 -n newest-cni-446707
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)

TestStartStop/group/embed-certs/serial/FirstStart (105.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-411843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0213 22:46:32.993303   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-411843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m45.922788658s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (105.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-660431 -n no-preload-660431
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-660431 -n no-preload-660431: exit status 7 (93.772142ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-660431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (359.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-660431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0213 22:46:39.972854   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:39.978133   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:39.988391   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:40.008723   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:40.049049   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:40.080370   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:46:40.129676   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:40.290131   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:40.610431   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:41.251620   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:42.532414   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:45.093281   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:50.213854   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:46:54.591050   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:54.596345   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:54.606644   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:54.626968   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:54.667345   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:54.747845   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:54.908297   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:55.229141   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:55.870007   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:57.150804   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:46:59.711298   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:47:00.454065   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:47:04.832445   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-660431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m58.824678717s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-660431 -n no-preload-660431
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (359.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-731075 -n old-k8s-version-731075
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-731075 -n old-k8s-version-731075: exit status 7 (114.398726ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-731075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (179.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-731075 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0213 22:47:13.954143   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:47:15.073457   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:47:20.934630   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:47:21.040953   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-731075 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m58.904096427s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-731075 -n old-k8s-version-731075
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (179.19s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609: exit status 7 (114.790857ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-050609 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-050609 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0213 22:47:35.553840   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:47:41.179386   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.184737   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.195014   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.215367   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.255680   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.336040   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.496516   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:41.817107   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:42.457957   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:43.738418   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:46.299339   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:47:51.420214   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:48:01.660963   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:48:01.895062   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:48:13.398597   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
E0213 22:48:16.514290   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-050609 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m37.589069417s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.93s)

TestStartStop/group/embed-certs/serial/DeployApp (7.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-411843 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [89cc3cb8-5858-452f-a41f-1345deb1de16] Pending
helpers_test.go:344: "busybox" [89cc3cb8-5858-452f-a41f-1345deb1de16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [89cc3cb8-5858-452f-a41f-1345deb1de16] Running
E0213 22:48:22.141977   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.005040774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-411843 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-411843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-411843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.301160711s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-411843 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/embed-certs/serial/Stop (92.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-411843 --alsologtostderr -v=3
E0213 22:48:32.133428   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.138697   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.148980   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.169259   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.210154   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.290485   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.450888   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:32.771084   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:33.411647   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:34.692569   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:35.874458   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/kindnet-452084/client.crt: no such file or directory
E0213 22:48:37.253141   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:42.373932   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:42.961540   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/auto-452084/client.crt: no such file or directory
E0213 22:48:44.675403   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:44.680659   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:44.690938   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:44.711256   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:44.751569   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:44.831920   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:44.992410   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:45.312534   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:45.953473   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:47.234579   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:49.794710   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:48:52.614528   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:48:54.915299   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:49:03.102332   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
E0213 22:49:05.155639   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:49:13.095500   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
E0213 22:49:15.275352   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/addons-174699/client.crt: no such file or directory
E0213 22:49:23.816071   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/calico-452084/client.crt: no such file or directory
E0213 22:49:25.636083   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
E0213 22:49:38.435369   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/custom-flannel-452084/client.crt: no such file or directory
E0213 22:49:54.056603   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-411843 --alsologtostderr -v=3: (1m32.303224103s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.30s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411843 -n embed-certs-411843
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411843 -n embed-certs-411843: exit status 7 (83.520561ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-411843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (581.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-411843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-411843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (9m40.870540334s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411843 -n embed-certs-411843
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (581.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9ncx4" [e9dc4daa-4d7e-4c23-8ae7-dbe8601ea561] Running
E0213 22:50:06.596930   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/bridge-452084/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004696671s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9ncx4" [e9dc4daa-4d7e-4c23-8ae7-dbe8601ea561] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004934349s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-731075 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-731075 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-731075 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-731075 -n old-k8s-version-731075
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-731075 -n old-k8s-version-731075: exit status 2 (269.100017ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-731075 -n old-k8s-version-731075
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-731075 -n old-k8s-version-731075: exit status 2 (268.811374ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-731075 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-731075 -n old-k8s-version-731075
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-731075 -n old-k8s-version-731075
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-249mz" [b1536189-b7ac-430f-bc82-cf4b8db99eeb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-249mz" [b1536189-b7ac-430f-bc82-cf4b8db99eeb] Running
E0213 22:52:41.179033   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/enable-default-cni-452084/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005565901s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-249mz" [b1536189-b7ac-430f-bc82-cf4b8db99eeb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004937317s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-660431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-660431 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.73s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-660431 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-660431 -n no-preload-660431
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-660431 -n no-preload-660431: exit status 2 (273.049088ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-660431 -n no-preload-660431
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-660431 -n no-preload-660431: exit status 2 (271.361206ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-660431 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-660431 -n no-preload-660431
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-660431 -n no-preload-660431
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8krk8" [32f92db3-04e3-4c95-b87f-54075842ea2d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0213 22:53:13.397813   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8krk8" [32f92db3-04e3-4c95-b87f-54075842ea2d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.00745625s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8krk8" [32f92db3-04e3-4c95-b87f-54075842ea2d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004402024s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-050609 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-050609 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-050609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609: exit status 2 (262.387341ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609: exit status 2 (270.81283ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-050609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609
E0213 22:53:32.133878   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/flannel-452084/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-050609 -n default-k8s-diff-port-050609
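The pause verification above deliberately tolerates exit status 2 from `minikube status`, since a paused cluster reports its state (`Paused` for the API server, `Stopped` for the kubelet) through a non-zero exit. A stub-based sketch of that tolerance logic (`status_stub` is illustrative, standing in for `out/minikube-linux-amd64 status --format={{.APIServer}}`):

```shell
#!/bin/sh
# status_stub stands in for `minikube status --format={{.APIServer}}`:
# a paused cluster prints its state but exits 2.
status_stub() { printf 'Paused'; return 2; }

out=$(status_stub); rc=$?
case "$rc" in
  0) verdict="running: $out" ;;
  2) verdict="status error: exit status 2 (may be ok): $out" ;;
  *) verdict="unexpected exit $rc" ;;
esac
echo "$verdict"
```

In the log above, both status checks take the exit-2 branch while paused, and `unpause` restores them before the final checks.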
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hvb6q" [67074e06-b0b8-46e7-9e24-f107d97fb29a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005965344s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hvb6q" [67074e06-b0b8-46e7-9e24-f107d97fb29a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004810409s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-411843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-411843 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-411843 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411843 -n embed-certs-411843
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411843 -n embed-certs-411843: exit status 2 (254.507971ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411843 -n embed-certs-411843
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411843 -n embed-certs-411843: exit status 2 (252.106721ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-411843 --alsologtostderr -v=1
E0213 22:59:52.865329   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/no-preload-660431/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411843 -n embed-certs-411843
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411843 -n embed-certs-411843
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.53s)

Test skip (39/318)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
247 TestNetworkPlugins/group/kubenet 3.3
256 TestNetworkPlugins/group/cilium 3.53
271 TestStartStop/group/disable-driver-mounts 0.14

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.3s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-452084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-452084

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> k8s: describe netcat deployment:
error: context "kubenet-452084" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-452084" does not exist
>>> k8s: netcat logs:
error: context "kubenet-452084" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-452084" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-452084" does not exist
>>> k8s: coredns logs:
error: context "kubenet-452084" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-452084" does not exist
>>> k8s: api server logs:
error: context "kubenet-452084" does not exist
>>> host: /etc/cni:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: ip a s:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: ip r s:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: iptables-save:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: iptables table nat:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-452084" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-452084" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-452084" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: kubelet daemon config:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> k8s: kubelet logs:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-452084
>>> host: docker daemon status:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: docker daemon config:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: docker system info:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: cri-docker daemon status:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: cri-docker daemon config:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: cri-dockerd version:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: containerd daemon status:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: containerd daemon config:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: containerd config dump:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: crio daemon status:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: crio daemon config:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: /etc/crio:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
>>> host: crio config:
* Profile "kubenet-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452084"
----------------------- debugLogs end: kubenet-452084 [took: 3.147019448s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-452084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-452084
--- SKIP: TestNetworkPlugins/group/kubenet (3.30s)
TestNetworkPlugins/group/cilium (3.53s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0213 22:33:13.398289   16162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8975/.minikube/profiles/ingress-addon-legacy-598836/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: cilium-452084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-452084
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-452084
>>> host: /etc/nsswitch.conf:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: /etc/hosts:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: /etc/resolv.conf:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-452084
>>> host: crictl pods:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: crictl containers:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> k8s: describe netcat deployment:
error: context "cilium-452084" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-452084" does not exist
>>> k8s: netcat logs:
error: context "cilium-452084" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-452084" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-452084" does not exist
>>> k8s: coredns logs:
error: context "cilium-452084" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-452084" does not exist
>>> k8s: api server logs:
error: context "cilium-452084" does not exist
>>> host: /etc/cni:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: ip a s:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: ip r s:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: iptables-save:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: iptables table nat:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-452084
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-452084
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-452084" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-452084" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-452084
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-452084
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-452084" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-452084" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-452084" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-452084" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-452084" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: kubelet daemon config:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> k8s: kubelet logs:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-452084
>>> host: docker daemon status:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: docker daemon config:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: docker system info:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"
>>> host: cri-docker daemon status:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: cri-docker daemon config:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: cri-dockerd version:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: containerd daemon status:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: containerd daemon config:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: containerd config dump:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: crio daemon status:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: crio daemon config:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: /etc/crio:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

>>> host: crio config:
* Profile "cilium-452084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452084"

----------------------- debugLogs end: cilium-452084 [took: 3.365556366s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-452084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-452084
--- SKIP: TestNetworkPlugins/group/cilium (3.53s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-286982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-286982
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
