Test Report: KVM_Linux_containerd 18649

7e28b54b3772a78cf87e91422424e940246c9ed2:2024-04-16:34054

Test fail (2/333)

Order  Failed test                  Duration (s)
39     TestAddons/parallel/Ingress  19.05
44     TestAddons/parallel/CSI      48.34
TestAddons/parallel/Ingress (19.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-012036 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-012036 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-012036 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [25a9360e-7eb4-41ab-b018-9cd3a574d555] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [25a9360e-7eb4-41ab-b018-9cd3a574d555] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.009156175s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-012036 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.247
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-012036 addons disable ingress-dns --alsologtostderr -v=1: (1.626935196s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-012036 addons disable ingress --alsologtostderr -v=1: exit status 11 (399.016841ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0416 16:23:05.608151   14070 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:23:05.608310   14070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:23:05.608322   14070 out.go:304] Setting ErrFile to fd 2...
	I0416 16:23:05.608327   14070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:23:05.608525   14070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:23:05.608786   14070 mustload.go:65] Loading cluster: addons-012036
	I0416 16:23:05.609143   14070 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:23:05.609162   14070 addons.go:597] checking whether the cluster is paused
	I0416 16:23:05.609261   14070 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:23:05.609274   14070 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:23:05.609652   14070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:23:05.609707   14070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:23:05.625607   14070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0416 16:23:05.626239   14070 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:23:05.626941   14070 main.go:141] libmachine: Using API Version  1
	I0416 16:23:05.626971   14070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:23:05.627373   14070 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:23:05.627585   14070 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:23:05.629315   14070 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:23:05.629554   14070 ssh_runner.go:195] Run: systemctl --version
	I0416 16:23:05.629583   14070 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:23:05.632116   14070 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:23:05.632568   14070 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:23:05.632598   14070 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:23:05.632762   14070 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:23:05.633005   14070 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:23:05.633193   14070 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:23:05.633336   14070 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:23:05.738221   14070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0416 16:23:05.738305   14070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:23:05.861920   14070 cri.go:89] found id: "b599890fb9c034735fb9f5964f815268eb07ef30ac8729517f00fa72d6109696"
	I0416 16:23:05.861947   14070 cri.go:89] found id: "6dcb42bc8b7b8829634f03ba603a3768ca32d9af9abd13a9147ddb3658c72b8f"
	I0416 16:23:05.861954   14070 cri.go:89] found id: "2a1e553761953c4abc10789770687355ba5d2a4b6d770e53e35ebf1b3aa0bb96"
	I0416 16:23:05.861960   14070 cri.go:89] found id: "b6246028475c81eacba55f063c21da9c4c960dd83314f5fd9af2137d2835d32c"
	I0416 16:23:05.861964   14070 cri.go:89] found id: "9ed676fde39246300ab97468d5587ac60c654caf1552c5e83d60f7b7cfe1aef7"
	I0416 16:23:05.861972   14070 cri.go:89] found id: "93b2144a44fd3d16f144ceb35f7e69404f5c020aed2b91f2a1934de6fefc1859"
	I0416 16:23:05.861976   14070 cri.go:89] found id: "704964b5972d3df0f8969e1a7e6b99625e92d3a7f3204a05b89853be082a5271"
	I0416 16:23:05.861979   14070 cri.go:89] found id: "86f1572e10b06d09eea995808a7412c1995eac8c1fc68f274f54c170123178eb"
	I0416 16:23:05.861981   14070 cri.go:89] found id: "3c4ada40b02b1fe4a5c02b82266fcc11273ba93d63b18e6ff891e6880fb25a33"
	I0416 16:23:05.861987   14070 cri.go:89] found id: "c25c9e32964c81cc36e6803199651d670631bedf057463aa4942a120f235791c"
	I0416 16:23:05.861990   14070 cri.go:89] found id: "0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593"
	I0416 16:23:05.861993   14070 cri.go:89] found id: "d754b9971ad2d2f5a7e70ad479abc97438d830807c7537054de9f14cdb834409"
	I0416 16:23:05.861995   14070 cri.go:89] found id: "f7179288f854b31cc4cbdd569bfcd28c058e519f2bf3526e9928a17684729742"
	I0416 16:23:05.861998   14070 cri.go:89] found id: "b656b7633700bf469cfbf1a15cde28b6e1a8cd5e1f762666e40a4eda00022a63"
	I0416 16:23:05.862002   14070 cri.go:89] found id: "24af4e069b22ff8e362e59eeacad22818e447bc78b5e86e5ede0b4994edf7fc7"
	I0416 16:23:05.862004   14070 cri.go:89] found id: "48a1e53b66a23e7a0573e41068f9c5090d8c75c664d2ab30d4d01cf1368f5624"
	I0416 16:23:05.862006   14070 cri.go:89] found id: "085bd521d80e689ee6adf7cb8b640371281a985e7349716003c1f7dc08415dac"
	I0416 16:23:05.862010   14070 cri.go:89] found id: "87ef232e07b969d1694735212110e97ade6960347449a86c2ad23f48f519c049"
	I0416 16:23:05.862012   14070 cri.go:89] found id: ""
	I0416 16:23:05.862050   14070 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0416 16:23:05.939803   14070 main.go:141] libmachine: Making call to close driver server
	I0416 16:23:05.939825   14070 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:23:05.940164   14070 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:23:05.940181   14070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:23:05.940232   14070 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:23:05.942753   14070 out.go:177] 
	W0416 16:23:05.944258   14070 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-16T16:23:05Z" level=error msg="stat /run/containerd/runc/k8s.io/704964b5972d3df0f8969e1a7e6b99625e92d3a7f3204a05b89853be082a5271: no such file or directory"
	
	W0416 16:23:05.944287   14070 out.go:239] * 
	W0416 16:23:05.946497   14070 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 16:23:05.948249   14070 out.go:177] 

** /stderr **
addons_test.go:313: failed to disable ingress addon. args "out/minikube-linux-amd64 -p addons-012036 addons disable ingress --alsologtostderr -v=1" : exit status 11
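The exit status 11 above traces to a time-of-check race in the paused-state check: `crictl ps -a` enumerated a kube-system container (704964b5972d3df0f8969e1a7e6b99625e92d3a7f3204a05b89853be082a5271) that exited before the follow-up `sudo runc --root /run/containerd/runc/k8s.io list -f json` could stat its state directory, so runc exited non-zero with "no such file or directory". A minimal Go sketch of how such a failure could be classified as a benign stale-bundle race rather than a hard error (the helper name is hypothetical and is not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// isStaleRuncBundleError reports whether a failed `runc list` looks like the
// transient race captured in this log: a container exits between `crictl ps`
// and `runc list`, so runc stats a state directory that no longer exists.
// Hypothetical helper for illustration only; not part of minikube.
func isStaleRuncBundleError(stderr string) bool {
	return strings.Contains(stderr, "no such file or directory")
}

func main() {
	// stderr captured from the failing run above.
	stderr := `level=error msg="stat /run/containerd/runc/k8s.io/704964b5972d3df0f8969e1a7e6b99625e92d3a7f3204a05b89853be082a5271: no such file or directory"`
	fmt.Println(isStaleRuncBundleError(stderr)) // prints true
}
```

Under this reading, retrying the `runc list` call (or skipping IDs that vanish between the two listings) would let the addon disable proceed instead of aborting with MK_ADDON_DISABLE_PAUSED.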
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-012036 -n addons-012036
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-012036 logs -n 25: (2.360736486s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-220331 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-220331                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-220331                                                                     | download-only-220331 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-253269                                                                     | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-310063                                                                     | download-only-310063 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-220331                                                                     | download-only-220331 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-437913 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | binary-mirror-437913                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:33293                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-437913                                                                     | binary-mirror-437913 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| addons  | enable dashboard -p                                                                         | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-012036 --wait=true                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| addons  | addons-012036 addons                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-012036 ip                                                                            | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | -p addons-012036                                                                            |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | -p addons-012036                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-012036 ssh cat                                                                       | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | /opt/local-path-provisioner/pvc-8f41ec9b-ffc7-4a6a-90f0-74da7d87242a_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ssh     | addons-012036 ssh curl -s                                                                   | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ip      | addons-012036 ip                                                                            | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-012036 addons                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC |                     |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:59
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:59.527245   11739 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:59.527526   11739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:59.527537   11739 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:59.527542   11739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:59.527741   11739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:19:59.528422   11739 out.go:298] Setting JSON to false
	I0416 16:19:59.529230   11739 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":150,"bootTime":1713284250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:59.529300   11739 start.go:139] virtualization: kvm guest
	I0416 16:19:59.531725   11739 out.go:177] * [addons-012036] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:59.533232   11739 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:19:59.533278   11739 notify.go:220] Checking for updates...
	I0416 16:19:59.536051   11739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:59.537531   11739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:19:59.538814   11739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:59.540095   11739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:19:59.541412   11739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:19:59.542804   11739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:19:59.577853   11739 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 16:19:59.579362   11739 start.go:297] selected driver: kvm2
	I0416 16:19:59.579378   11739 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:19:59.579394   11739 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:19:59.580090   11739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:59.580188   11739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3613/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:19:59.596402   11739 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:19:59.596482   11739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:19:59.596725   11739 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:19:59.596790   11739 cni.go:84] Creating CNI manager for ""
	I0416 16:19:59.596808   11739 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:19:59.596828   11739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:19:59.596894   11739 start.go:340] cluster config:
	{Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:19:59.597012   11739 iso.go:125] acquiring lock: {Name:mk70afca65b055481b04a6db2c93574dfae6043a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:59.598884   11739 out.go:177] * Starting "addons-012036" primary control-plane node in "addons-012036" cluster
	I0416 16:19:59.600486   11739 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0416 16:19:59.600531   11739 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0416 16:19:59.600544   11739 cache.go:56] Caching tarball of preloaded images
	I0416 16:19:59.600631   11739 preload.go:173] Found /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:19:59.600643   11739 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0416 16:19:59.600958   11739 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/config.json ...
	I0416 16:19:59.600989   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/config.json: {Name:mk66815558bebc3bd2f023ca5dabf70847044b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:19:59.601152   11739 start.go:360] acquireMachinesLock for addons-012036: {Name:mk2d52a4d04829b055d900e30b1db98f01926bd9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:19:59.601221   11739 start.go:364] duration metric: took 50.948µs to acquireMachinesLock for "addons-012036"
	I0416 16:19:59.601252   11739 start.go:93] Provisioning new machine with config: &{Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0416 16:19:59.601336   11739 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 16:19:59.603292   11739 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0416 16:19:59.603439   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:19:59.603485   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:19:59.618271   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0416 16:19:59.618682   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:19:59.619245   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:19:59.619272   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:19:59.619700   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:19:59.619899   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:19:59.620048   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:19:59.620195   11739 start.go:159] libmachine.API.Create for "addons-012036" (driver="kvm2")
	I0416 16:19:59.620227   11739 client.go:168] LocalClient.Create starting
	I0416 16:19:59.620282   11739 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem
	I0416 16:19:59.746334   11739 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem
	I0416 16:19:59.880939   11739 main.go:141] libmachine: Running pre-create checks...
	I0416 16:19:59.880967   11739 main.go:141] libmachine: (addons-012036) Calling .PreCreateCheck
	I0416 16:19:59.881527   11739 main.go:141] libmachine: (addons-012036) Calling .GetConfigRaw
	I0416 16:19:59.882013   11739 main.go:141] libmachine: Creating machine...
	I0416 16:19:59.882030   11739 main.go:141] libmachine: (addons-012036) Calling .Create
	I0416 16:19:59.882209   11739 main.go:141] libmachine: (addons-012036) Creating KVM machine...
	I0416 16:19:59.883506   11739 main.go:141] libmachine: (addons-012036) DBG | found existing default KVM network
	I0416 16:19:59.884383   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:19:59.884210   11761 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0416 16:19:59.884429   11739 main.go:141] libmachine: (addons-012036) DBG | created network xml: 
	I0416 16:19:59.884460   11739 main.go:141] libmachine: (addons-012036) DBG | <network>
	I0416 16:19:59.884475   11739 main.go:141] libmachine: (addons-012036) DBG |   <name>mk-addons-012036</name>
	I0416 16:19:59.884486   11739 main.go:141] libmachine: (addons-012036) DBG |   <dns enable='no'/>
	I0416 16:19:59.884492   11739 main.go:141] libmachine: (addons-012036) DBG |   
	I0416 16:19:59.884499   11739 main.go:141] libmachine: (addons-012036) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0416 16:19:59.884507   11739 main.go:141] libmachine: (addons-012036) DBG |     <dhcp>
	I0416 16:19:59.884516   11739 main.go:141] libmachine: (addons-012036) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0416 16:19:59.884528   11739 main.go:141] libmachine: (addons-012036) DBG |     </dhcp>
	I0416 16:19:59.884539   11739 main.go:141] libmachine: (addons-012036) DBG |   </ip>
	I0416 16:19:59.884552   11739 main.go:141] libmachine: (addons-012036) DBG |   
	I0416 16:19:59.884561   11739 main.go:141] libmachine: (addons-012036) DBG | </network>
	I0416 16:19:59.884568   11739 main.go:141] libmachine: (addons-012036) DBG | 
	I0416 16:19:59.890144   11739 main.go:141] libmachine: (addons-012036) DBG | trying to create private KVM network mk-addons-012036 192.168.39.0/24...
	I0416 16:19:59.963308   11739 main.go:141] libmachine: (addons-012036) DBG | private KVM network mk-addons-012036 192.168.39.0/24 created
	I0416 16:19:59.963427   11739 main.go:141] libmachine: (addons-012036) Setting up store path in /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036 ...
	I0416 16:19:59.963450   11739 main.go:141] libmachine: (addons-012036) Building disk image from file:///home/jenkins/minikube-integration/18649-3613/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:19:59.963472   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:19:59.963394   11761 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:59.963640   11739 main.go:141] libmachine: (addons-012036) Downloading /home/jenkins/minikube-integration/18649-3613/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3613/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:20:00.200845   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:00.200702   11761 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa...
	I0416 16:20:00.389455   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:00.389312   11761 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/addons-012036.rawdisk...
	I0416 16:20:00.389478   11739 main.go:141] libmachine: (addons-012036) DBG | Writing magic tar header
	I0416 16:20:00.389488   11739 main.go:141] libmachine: (addons-012036) DBG | Writing SSH key tar header
	I0416 16:20:00.389498   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:00.389436   11761 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036 ...
	I0416 16:20:00.389561   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036
	I0416 16:20:00.389584   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613/.minikube/machines
	I0416 16:20:00.389594   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:20:00.389641   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036 (perms=drwx------)
	I0416 16:20:00.389667   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:20:00.389683   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613
	I0416 16:20:00.389725   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613/.minikube (perms=drwxr-xr-x)
	I0416 16:20:00.389744   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:20:00.389755   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:20:00.389764   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home
	I0416 16:20:00.389775   11739 main.go:141] libmachine: (addons-012036) DBG | Skipping /home - not owner
	I0416 16:20:00.389817   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613 (perms=drwxrwxr-x)
	I0416 16:20:00.389839   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:20:00.389847   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:20:00.389855   11739 main.go:141] libmachine: (addons-012036) Creating domain...
	I0416 16:20:00.390879   11739 main.go:141] libmachine: (addons-012036) define libvirt domain using xml: 
	I0416 16:20:00.390915   11739 main.go:141] libmachine: (addons-012036) <domain type='kvm'>
	I0416 16:20:00.390926   11739 main.go:141] libmachine: (addons-012036)   <name>addons-012036</name>
	I0416 16:20:00.390935   11739 main.go:141] libmachine: (addons-012036)   <memory unit='MiB'>4000</memory>
	I0416 16:20:00.390949   11739 main.go:141] libmachine: (addons-012036)   <vcpu>2</vcpu>
	I0416 16:20:00.390959   11739 main.go:141] libmachine: (addons-012036)   <features>
	I0416 16:20:00.390968   11739 main.go:141] libmachine: (addons-012036)     <acpi/>
	I0416 16:20:00.390983   11739 main.go:141] libmachine: (addons-012036)     <apic/>
	I0416 16:20:00.390995   11739 main.go:141] libmachine: (addons-012036)     <pae/>
	I0416 16:20:00.391002   11739 main.go:141] libmachine: (addons-012036)     
	I0416 16:20:00.391012   11739 main.go:141] libmachine: (addons-012036)   </features>
	I0416 16:20:00.391017   11739 main.go:141] libmachine: (addons-012036)   <cpu mode='host-passthrough'>
	I0416 16:20:00.391022   11739 main.go:141] libmachine: (addons-012036)   
	I0416 16:20:00.391032   11739 main.go:141] libmachine: (addons-012036)   </cpu>
	I0416 16:20:00.391040   11739 main.go:141] libmachine: (addons-012036)   <os>
	I0416 16:20:00.391045   11739 main.go:141] libmachine: (addons-012036)     <type>hvm</type>
	I0416 16:20:00.391053   11739 main.go:141] libmachine: (addons-012036)     <boot dev='cdrom'/>
	I0416 16:20:00.391058   11739 main.go:141] libmachine: (addons-012036)     <boot dev='hd'/>
	I0416 16:20:00.391067   11739 main.go:141] libmachine: (addons-012036)     <bootmenu enable='no'/>
	I0416 16:20:00.391072   11739 main.go:141] libmachine: (addons-012036)   </os>
	I0416 16:20:00.391080   11739 main.go:141] libmachine: (addons-012036)   <devices>
	I0416 16:20:00.391090   11739 main.go:141] libmachine: (addons-012036)     <disk type='file' device='cdrom'>
	I0416 16:20:00.391100   11739 main.go:141] libmachine: (addons-012036)       <source file='/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/boot2docker.iso'/>
	I0416 16:20:00.391112   11739 main.go:141] libmachine: (addons-012036)       <target dev='hdc' bus='scsi'/>
	I0416 16:20:00.391121   11739 main.go:141] libmachine: (addons-012036)       <readonly/>
	I0416 16:20:00.391142   11739 main.go:141] libmachine: (addons-012036)     </disk>
	I0416 16:20:00.391155   11739 main.go:141] libmachine: (addons-012036)     <disk type='file' device='disk'>
	I0416 16:20:00.391164   11739 main.go:141] libmachine: (addons-012036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:20:00.391180   11739 main.go:141] libmachine: (addons-012036)       <source file='/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/addons-012036.rawdisk'/>
	I0416 16:20:00.391192   11739 main.go:141] libmachine: (addons-012036)       <target dev='hda' bus='virtio'/>
	I0416 16:20:00.391204   11739 main.go:141] libmachine: (addons-012036)     </disk>
	I0416 16:20:00.391214   11739 main.go:141] libmachine: (addons-012036)     <interface type='network'>
	I0416 16:20:00.391226   11739 main.go:141] libmachine: (addons-012036)       <source network='mk-addons-012036'/>
	I0416 16:20:00.391234   11739 main.go:141] libmachine: (addons-012036)       <model type='virtio'/>
	I0416 16:20:00.391243   11739 main.go:141] libmachine: (addons-012036)     </interface>
	I0416 16:20:00.391248   11739 main.go:141] libmachine: (addons-012036)     <interface type='network'>
	I0416 16:20:00.391255   11739 main.go:141] libmachine: (addons-012036)       <source network='default'/>
	I0416 16:20:00.391260   11739 main.go:141] libmachine: (addons-012036)       <model type='virtio'/>
	I0416 16:20:00.391268   11739 main.go:141] libmachine: (addons-012036)     </interface>
	I0416 16:20:00.391272   11739 main.go:141] libmachine: (addons-012036)     <serial type='pty'>
	I0416 16:20:00.391280   11739 main.go:141] libmachine: (addons-012036)       <target port='0'/>
	I0416 16:20:00.391287   11739 main.go:141] libmachine: (addons-012036)     </serial>
	I0416 16:20:00.391293   11739 main.go:141] libmachine: (addons-012036)     <console type='pty'>
	I0416 16:20:00.391300   11739 main.go:141] libmachine: (addons-012036)       <target type='serial' port='0'/>
	I0416 16:20:00.391332   11739 main.go:141] libmachine: (addons-012036)     </console>
	I0416 16:20:00.391358   11739 main.go:141] libmachine: (addons-012036)     <rng model='virtio'>
	I0416 16:20:00.391374   11739 main.go:141] libmachine: (addons-012036)       <backend model='random'>/dev/random</backend>
	I0416 16:20:00.391385   11739 main.go:141] libmachine: (addons-012036)     </rng>
	I0416 16:20:00.391396   11739 main.go:141] libmachine: (addons-012036)     
	I0416 16:20:00.391406   11739 main.go:141] libmachine: (addons-012036)     
	I0416 16:20:00.391417   11739 main.go:141] libmachine: (addons-012036)   </devices>
	I0416 16:20:00.391429   11739 main.go:141] libmachine: (addons-012036) </domain>
	I0416 16:20:00.391450   11739 main.go:141] libmachine: (addons-012036) 
	I0416 16:20:00.397797   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:81:9e:a9 in network default
	I0416 16:20:00.398420   11739 main.go:141] libmachine: (addons-012036) Ensuring networks are active...
	I0416 16:20:00.398445   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:00.399114   11739 main.go:141] libmachine: (addons-012036) Ensuring network default is active
	I0416 16:20:00.399434   11739 main.go:141] libmachine: (addons-012036) Ensuring network mk-addons-012036 is active
	I0416 16:20:00.399959   11739 main.go:141] libmachine: (addons-012036) Getting domain xml...
	I0416 16:20:00.400693   11739 main.go:141] libmachine: (addons-012036) Creating domain...
	I0416 16:20:01.847412   11739 main.go:141] libmachine: (addons-012036) Waiting to get IP...
	I0416 16:20:01.848294   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:01.848677   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:01.848720   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:01.848670   11761 retry.go:31] will retry after 255.945162ms: waiting for machine to come up
	I0416 16:20:02.106284   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:02.106746   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:02.106774   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:02.106706   11761 retry.go:31] will retry after 366.834761ms: waiting for machine to come up
	I0416 16:20:02.475444   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:02.475859   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:02.475880   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:02.475838   11761 retry.go:31] will retry after 386.130051ms: waiting for machine to come up
	I0416 16:20:02.863399   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:02.863861   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:02.863888   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:02.863802   11761 retry.go:31] will retry after 584.84142ms: waiting for machine to come up
	I0416 16:20:03.450767   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:03.451243   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:03.451268   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:03.451202   11761 retry.go:31] will retry after 716.748039ms: waiting for machine to come up
	I0416 16:20:04.169306   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:04.169703   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:04.169748   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:04.169668   11761 retry.go:31] will retry after 844.438849ms: waiting for machine to come up
	I0416 16:20:05.015229   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:05.015609   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:05.015631   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:05.015569   11761 retry.go:31] will retry after 723.980814ms: waiting for machine to come up
	I0416 16:20:05.741666   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:05.741988   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:05.742010   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:05.741960   11761 retry.go:31] will retry after 1.348041583s: waiting for machine to come up
	I0416 16:20:07.092468   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:07.092906   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:07.092923   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:07.092861   11761 retry.go:31] will retry after 1.612633285s: waiting for machine to come up
	I0416 16:20:08.707805   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:08.708256   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:08.708286   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:08.708210   11761 retry.go:31] will retry after 2.090027603s: waiting for machine to come up
	I0416 16:20:10.799583   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:10.800038   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:10.800062   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:10.800000   11761 retry.go:31] will retry after 2.137796384s: waiting for machine to come up
	I0416 16:20:12.938896   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:12.939255   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:12.939290   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:12.939219   11761 retry.go:31] will retry after 3.492845465s: waiting for machine to come up
	I0416 16:20:16.434224   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:16.434793   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:16.434816   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:16.434731   11761 retry.go:31] will retry after 4.261651129s: waiting for machine to come up
	I0416 16:20:20.697906   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:20.698385   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:20.698423   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:20.698361   11761 retry.go:31] will retry after 3.86830584s: waiting for machine to come up
	I0416 16:20:24.571593   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:24.572110   11739 main.go:141] libmachine: (addons-012036) Found IP for machine: 192.168.39.247
	I0416 16:20:24.572133   11739 main.go:141] libmachine: (addons-012036) Reserving static IP address...
	I0416 16:20:24.572150   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has current primary IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:24.572489   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find host DHCP lease matching {name: "addons-012036", mac: "52:54:00:dd:cf:c9", ip: "192.168.39.247"} in network mk-addons-012036
	I0416 16:20:24.652648   11739 main.go:141] libmachine: (addons-012036) DBG | Getting to WaitForSSH function...
	I0416 16:20:24.652688   11739 main.go:141] libmachine: (addons-012036) Reserved static IP address: 192.168.39.247
	I0416 16:20:24.652700   11739 main.go:141] libmachine: (addons-012036) Waiting for SSH to be available...
	I0416 16:20:24.655288   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:24.655611   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036
	I0416 16:20:24.655633   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find defined IP address of network mk-addons-012036 interface with MAC address 52:54:00:dd:cf:c9
	I0416 16:20:24.655860   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH client type: external
	I0416 16:20:24.655883   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa (-rw-------)
	I0416 16:20:24.655905   11739 main.go:141] libmachine: (addons-012036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:20:24.655917   11739 main.go:141] libmachine: (addons-012036) DBG | About to run SSH command:
	I0416 16:20:24.655927   11739 main.go:141] libmachine: (addons-012036) DBG | exit 0
	I0416 16:20:24.667733   11739 main.go:141] libmachine: (addons-012036) DBG | SSH cmd err, output: exit status 255: 
	I0416 16:20:24.667760   11739 main.go:141] libmachine: (addons-012036) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 16:20:24.667771   11739 main.go:141] libmachine: (addons-012036) DBG | command : exit 0
	I0416 16:20:24.667779   11739 main.go:141] libmachine: (addons-012036) DBG | err     : exit status 255
	I0416 16:20:24.667790   11739 main.go:141] libmachine: (addons-012036) DBG | output  : 
	I0416 16:20:27.668005   11739 main.go:141] libmachine: (addons-012036) DBG | Getting to WaitForSSH function...
	I0416 16:20:27.670351   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.670717   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:27.670755   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.670884   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH client type: external
	I0416 16:20:27.670909   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa (-rw-------)
	I0416 16:20:27.670932   11739 main.go:141] libmachine: (addons-012036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:20:27.670949   11739 main.go:141] libmachine: (addons-012036) DBG | About to run SSH command:
	I0416 16:20:27.670964   11739 main.go:141] libmachine: (addons-012036) DBG | exit 0
	I0416 16:20:27.796051   11739 main.go:141] libmachine: (addons-012036) DBG | SSH cmd err, output: <nil>: 
	I0416 16:20:27.796440   11739 main.go:141] libmachine: (addons-012036) KVM machine creation complete!
	I0416 16:20:27.796681   11739 main.go:141] libmachine: (addons-012036) Calling .GetConfigRaw
	I0416 16:20:27.797276   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:27.797482   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:27.797625   11739 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:20:27.797641   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:20:27.798941   11739 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:20:27.798960   11739 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:20:27.798968   11739 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:20:27.798974   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:27.801332   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.801653   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:27.801684   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.801790   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:27.801998   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.802155   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.802298   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:27.802479   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:27.802642   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:27.802653   11739 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:20:27.906954   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:20:27.906975   11739 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:20:27.906983   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:27.909604   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.909994   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:27.910027   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.910142   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:27.910350   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.910512   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.910621   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:27.910817   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:27.910998   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:27.911012   11739 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:20:28.016880   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:20:28.016975   11739 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:20:28.016990   11739 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:20:28.017003   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:20:28.017269   11739 buildroot.go:166] provisioning hostname "addons-012036"
	I0416 16:20:28.017309   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:20:28.017545   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.020128   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.020472   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.020524   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.020733   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.020909   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.021065   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.021215   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.021381   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:28.021554   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:28.021567   11739 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-012036 && echo "addons-012036" | sudo tee /etc/hostname
	I0416 16:20:28.141999   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012036
	
	I0416 16:20:28.142028   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.144672   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.144992   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.145019   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.145218   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.145439   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.145631   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.145788   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.145968   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:28.146137   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:28.146153   11739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-012036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-012036/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-012036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:20:28.262924   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:20:28.262961   11739 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3613/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3613/.minikube}
	I0416 16:20:28.262989   11739 buildroot.go:174] setting up certificates
	I0416 16:20:28.263000   11739 provision.go:84] configureAuth start
	I0416 16:20:28.263013   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:20:28.263343   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:28.265966   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.266290   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.266312   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.266482   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.268651   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.269020   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.269040   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.269172   11739 provision.go:143] copyHostCerts
	I0416 16:20:28.269254   11739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3613/.minikube/ca.pem (1078 bytes)
	I0416 16:20:28.269414   11739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3613/.minikube/cert.pem (1123 bytes)
	I0416 16:20:28.269513   11739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3613/.minikube/key.pem (1675 bytes)
	I0416 16:20:28.269598   11739 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca-key.pem org=jenkins.addons-012036 san=[127.0.0.1 192.168.39.247 addons-012036 localhost minikube]
	I0416 16:20:28.404570   11739 provision.go:177] copyRemoteCerts
	I0416 16:20:28.404627   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:20:28.404653   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.407562   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.407893   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.407924   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.408099   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.408337   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.408478   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.408654   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:28.499351   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:20:28.532783   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:20:28.577530   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:20:28.607574   11739 provision.go:87] duration metric: took 344.562659ms to configureAuth
	I0416 16:20:28.607610   11739 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:20:28.607841   11739 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:20:28.607871   11739 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:20:28.607883   11739 main.go:141] libmachine: (addons-012036) Calling .GetURL
	I0416 16:20:28.609116   11739 main.go:141] libmachine: (addons-012036) DBG | Using libvirt version 6000000
	I0416 16:20:28.611263   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.611676   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.611695   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.611946   11739 main.go:141] libmachine: Docker is up and running!
	I0416 16:20:28.611965   11739 main.go:141] libmachine: Reticulating splines...
	I0416 16:20:28.611974   11739 client.go:171] duration metric: took 28.991735116s to LocalClient.Create
	I0416 16:20:28.611999   11739 start.go:167] duration metric: took 28.991802959s to libmachine.API.Create "addons-012036"
	I0416 16:20:28.612011   11739 start.go:293] postStartSetup for "addons-012036" (driver="kvm2")
	I0416 16:20:28.612025   11739 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:20:28.612062   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.612310   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:20:28.612333   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.614770   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.615233   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.615261   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.615443   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.615671   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.615854   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.615998   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:28.699360   11739 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:20:28.705192   11739 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:20:28.705232   11739 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3613/.minikube/addons for local assets ...
	I0416 16:20:28.705296   11739 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3613/.minikube/files for local assets ...
	I0416 16:20:28.705319   11739 start.go:296] duration metric: took 93.301134ms for postStartSetup
	I0416 16:20:28.705350   11739 main.go:141] libmachine: (addons-012036) Calling .GetConfigRaw
	I0416 16:20:28.743430   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:28.746342   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.746748   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.746805   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.747082   11739 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/config.json ...
	I0416 16:20:28.808163   11739 start.go:128] duration metric: took 29.206809207s to createHost
	I0416 16:20:28.808226   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.811324   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.811724   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.811762   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.812067   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.812305   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.812504   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.812673   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.812847   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:28.813017   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:28.813029   11739 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:20:28.921255   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713284428.908939594
	
	I0416 16:20:28.921285   11739 fix.go:216] guest clock: 1713284428.908939594
	I0416 16:20:28.921295   11739 fix.go:229] Guest: 2024-04-16 16:20:28.908939594 +0000 UTC Remote: 2024-04-16 16:20:28.80818957 +0000 UTC m=+29.328031426 (delta=100.750024ms)
	I0416 16:20:28.921333   11739 fix.go:200] guest clock delta is within tolerance: 100.750024ms
	I0416 16:20:28.921342   11739 start.go:83] releasing machines lock for "addons-012036", held for 29.320107375s
	I0416 16:20:28.921377   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.921687   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:28.924400   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.924761   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.924793   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.924934   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.925582   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.925788   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.925904   11739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:20:28.925945   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.926016   11739 ssh_runner.go:195] Run: cat /version.json
	I0416 16:20:28.926040   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.928769   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929052   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929086   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.929107   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929300   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.929488   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.929543   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.929570   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929694   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.929733   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.929875   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.929879   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:28.930032   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.930184   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:29.009394   11739 ssh_runner.go:195] Run: systemctl --version
	I0416 16:20:29.040153   11739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:20:29.047305   11739 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:20:29.047387   11739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:20:29.067100   11739 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:20:29.067133   11739 start.go:494] detecting cgroup driver to use...
	I0416 16:20:29.067241   11739 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:20:29.311439   11739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:20:29.326715   11739 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:20:29.326788   11739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:20:29.342653   11739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:20:29.358843   11739 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:20:29.489765   11739 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:20:29.658463   11739 docker.go:233] disabling docker service ...
	I0416 16:20:29.658529   11739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:20:29.676244   11739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:20:29.692845   11739 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:20:29.820900   11739 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:20:29.967437   11739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:20:29.983812   11739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:20:30.006954   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:20:30.020149   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:20:30.033236   11739 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:20:30.033303   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:20:30.046262   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:20:30.059317   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:20:30.072189   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:20:30.085125   11739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:20:30.099112   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:20:30.112098   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:20:30.124845   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:20:30.138222   11739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:20:30.149785   11739 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:20:30.149847   11739 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:20:30.165569   11739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:20:30.177951   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:30.326028   11739 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:20:30.360985   11739 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0416 16:20:30.361081   11739 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0416 16:20:30.366492   11739 retry.go:31] will retry after 646.519722ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0416 16:20:31.013371   11739 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0416 16:20:31.019724   11739 start.go:562] Will wait 60s for crictl version
	I0416 16:20:31.019805   11739 ssh_runner.go:195] Run: which crictl
	I0416 16:20:31.024787   11739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:20:31.062124   11739 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0416 16:20:31.062244   11739 ssh_runner.go:195] Run: containerd --version
	I0416 16:20:31.092252   11739 ssh_runner.go:195] Run: containerd --version
	I0416 16:20:31.127029   11739 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.15 ...
	I0416 16:20:31.128692   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:31.131466   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:31.131752   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:31.131792   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:31.132079   11739 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:20:31.137162   11739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:20:31.152111   11739 kubeadm.go:877] updating cluster {Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:20:31.152209   11739 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0416 16:20:31.152279   11739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:31.190277   11739 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 16:20:31.190343   11739 ssh_runner.go:195] Run: which lz4
	I0416 16:20:31.195339   11739 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:20:31.200495   11739 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:20:31.200538   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0416 16:20:32.892543   11739 containerd.go:563] duration metric: took 1.697230091s to copy over tarball
	I0416 16:20:32.892626   11739 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:20:35.763378   11739 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.870714673s)
	I0416 16:20:35.763412   11739 containerd.go:570] duration metric: took 2.870838698s to extract the tarball
	I0416 16:20:35.763419   11739 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:20:35.805896   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:35.941248   11739 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:20:35.968495   11739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:36.021519   11739 retry.go:31] will retry after 312.428405ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-16T16:20:36Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0416 16:20:36.335218   11739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:36.380284   11739 containerd.go:627] all images are preloaded for containerd runtime.
	I0416 16:20:36.380308   11739 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:20:36.380315   11739 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.29.3 containerd true true} ...
	I0416 16:20:36.380419   11739 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-012036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:20:36.380470   11739 ssh_runner.go:195] Run: sudo crictl info
	I0416 16:20:36.420828   11739 cni.go:84] Creating CNI manager for ""
	I0416 16:20:36.420856   11739 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:20:36.420866   11739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:20:36.420885   11739 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-012036 NodeName:addons-012036 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:20:36.420997   11739 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-012036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
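	The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` markers. A quick structural sanity check in Go — splitting on document separators and pulling out each `kind:` — can confirm the stream has the expected shape before feeding it to kubeadm (this is an illustrative string scan, not a YAML parser):

```go
package main

import (
	"fmt"
	"strings"
)

// kinds extracts the "kind:" value from each document in a
// multi-document YAML stream, split on standalone "---" separators.
func kinds(yaml string) []string {
	var out []string
	for _, doc := range strings.Split(yaml, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n" +
		"---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n" +
		"---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
		"---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}
```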
	
	I0416 16:20:36.421054   11739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:20:36.433902   11739 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:20:36.433965   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 16:20:36.446698   11739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0416 16:20:36.467910   11739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:20:36.489267   11739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0416 16:20:36.512723   11739 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0416 16:20:36.517599   11739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:20:36.533345   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:36.667064   11739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:20:36.692948   11739 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036 for IP: 192.168.39.247
	I0416 16:20:36.692973   11739 certs.go:194] generating shared ca certs ...
	I0416 16:20:36.693008   11739 certs.go:226] acquiring lock for ca certs: {Name:mk9ced23d0481cc75aea9804ec6a597cc9021aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:36.693149   11739 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key
	I0416 16:20:36.747986   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt ...
	I0416 16:20:36.748026   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt: {Name:mkac58f778aaf55d4b88bed00622c014e0c9b3b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:36.748227   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key ...
	I0416 16:20:36.748243   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key: {Name:mk0dde4dace016394ebca3966c4697c488b041ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:36.748361   11739 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key
	I0416 16:20:37.086132   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.crt ...
	I0416 16:20:37.086161   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.crt: {Name:mk525e75f6f10a02af5bebafaf0f8ccd3eb9b5df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.086325   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key ...
	I0416 16:20:37.086337   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key: {Name:mk568b12fa31440e2141c5fc8fb8f5ca63d07af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.086400   11739 certs.go:256] generating profile certs ...
	I0416 16:20:37.086449   11739 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.key
	I0416 16:20:37.086469   11739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt with IP's: []
	I0416 16:20:37.227588   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt ...
	I0416 16:20:37.227622   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: {Name:mk53eab5e711f42ef1130930a40f74027d4f6ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.227785   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.key ...
	I0416 16:20:37.227797   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.key: {Name:mk3618038a8a8e5bd434236ab70706479010e8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.227863   11739 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b
	I0416 16:20:37.227880   11739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.247]
	I0416 16:20:37.421130   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b ...
	I0416 16:20:37.421177   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b: {Name:mk75d1a57a155081891bfb12a29f30816b216c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.421377   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b ...
	I0416 16:20:37.421396   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b: {Name:mk34e1b3ac76f04cea4f014be3a40a6a2b0e8fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.421502   11739 certs.go:381] copying /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b -> /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt
	I0416 16:20:37.421594   11739 certs.go:385] copying /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b -> /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key
	I0416 16:20:37.421676   11739 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key
	I0416 16:20:37.421702   11739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt with IP's: []
	I0416 16:20:37.509226   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt ...
	I0416 16:20:37.509262   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt: {Name:mk9c3d287b8db878b1aacc52c4081f33bf154aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.509455   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key ...
	I0416 16:20:37.509471   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key: {Name:mk03540289e6f1ad0891e734700dfcb3b7e40690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.509696   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 16:20:37.509790   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem (1078 bytes)
	I0416 16:20:37.509836   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:20:37.509872   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/key.pem (1675 bytes)
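	The ca.crt/ca.key pairs written above are self-signed certificate authorities generated on first start. A hedged stdlib sketch of the same kind of artifact (the `newCA` helper, the 2048-bit key size, serial number, and 24h validity are illustrative choices, not minikube's exact parameters):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newCA creates a self-signed CA certificate and its private key —
// the same shape of artifact the log shows being written as
// ca.crt/ca.key and proxy-client-ca.crt/proxy-client-ca.key.
func newCA(cn string) (*x509.Certificate, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: cn},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Template doubles as parent: the cert signs itself.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}

func main() {
	cert, _, err := newCA("minikubeCA")
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.Subject.CommonName, cert.IsCA) // minikubeCA true
}
```

The profile certs that follow (client, apiserver, aggregator) are then signed by these CAs rather than self-signed.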
	I0416 16:20:37.510443   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:20:37.542421   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0416 16:20:37.571601   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:20:37.600692   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 16:20:37.630011   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 16:20:37.659047   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:20:37.688474   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:20:37.717223   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:20:37.745401   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:20:37.774273   11739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:20:37.794785   11739 ssh_runner.go:195] Run: openssl version
	I0416 16:20:37.801401   11739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:20:37.815243   11739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:37.821066   11739 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:37.821158   11739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:37.827739   11739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:20:37.841521   11739 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:20:37.846661   11739 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:20:37.846707   11739 kubeadm.go:391] StartCluster: {Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:20:37.846811   11739 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0416 16:20:37.846879   11739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:20:37.887029   11739 cri.go:89] found id: ""
	I0416 16:20:37.887113   11739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:20:37.899452   11739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:20:37.911686   11739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:20:37.923689   11739 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:20:37.923708   11739 kubeadm.go:156] found existing configuration files:
	
	I0416 16:20:37.923776   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:20:37.935176   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:20:37.935233   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:20:37.947041   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:20:37.958616   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:20:37.958688   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:20:37.970641   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:20:37.981907   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:20:37.981976   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:20:37.993821   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:20:38.005466   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:20:38.005529   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:20:38.017545   11739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:20:38.072757   11739 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:20:38.072826   11739 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:20:38.253339   11739 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:20:38.253522   11739 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:20:38.253650   11739 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:20:38.507089   11739 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:20:38.510914   11739 out.go:204]   - Generating certificates and keys ...
	I0416 16:20:38.511035   11739 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:20:38.511162   11739 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:20:38.786353   11739 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:20:39.123890   11739 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:20:39.267597   11739 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:20:39.408043   11739 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:20:39.763797   11739 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:20:39.764066   11739 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-012036 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0416 16:20:40.136094   11739 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:20:40.136491   11739 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-012036 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0416 16:20:40.385981   11739 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:20:40.530577   11739 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:20:40.767039   11739 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:20:40.767418   11739 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:20:40.952976   11739 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:20:41.047614   11739 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:20:41.176543   11739 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:20:41.258363   11739 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:20:41.546069   11739 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:20:41.546780   11739 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:20:41.549402   11739 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:20:41.551622   11739 out.go:204]   - Booting up control plane ...
	I0416 16:20:41.551758   11739 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:20:41.552577   11739 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:20:41.553475   11739 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:20:41.572407   11739 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:20:41.575256   11739 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:20:41.575647   11739 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:20:41.717464   11739 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:20:48.218776   11739 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502352 seconds
	I0416 16:20:48.234635   11739 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:20:48.260642   11739 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:20:48.798917   11739 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:20:48.799127   11739 kubeadm.go:309] [mark-control-plane] Marking the node addons-012036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:20:49.316075   11739 kubeadm.go:309] [bootstrap-token] Using token: bz5n4w.4jwc771jzhysl5pt
	I0416 16:20:49.317890   11739 out.go:204]   - Configuring RBAC rules ...
	I0416 16:20:49.318055   11739 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:20:49.324756   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:20:49.339981   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:20:49.344286   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:20:49.349419   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:20:49.355926   11739 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:20:49.376312   11739 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:20:49.652583   11739 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:20:49.732260   11739 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:20:49.736953   11739 kubeadm.go:309] 
	I0416 16:20:49.737046   11739 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:20:49.737059   11739 kubeadm.go:309] 
	I0416 16:20:49.737183   11739 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:20:49.737202   11739 kubeadm.go:309] 
	I0416 16:20:49.737251   11739 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:20:49.737337   11739 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:20:49.737421   11739 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:20:49.737430   11739 kubeadm.go:309] 
	I0416 16:20:49.737514   11739 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:20:49.737535   11739 kubeadm.go:309] 
	I0416 16:20:49.737598   11739 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:20:49.737607   11739 kubeadm.go:309] 
	I0416 16:20:49.737669   11739 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:20:49.737773   11739 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:20:49.737859   11739 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:20:49.737871   11739 kubeadm.go:309] 
	I0416 16:20:49.737994   11739 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:20:49.738117   11739 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:20:49.738126   11739 kubeadm.go:309] 
	I0416 16:20:49.738218   11739 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bz5n4w.4jwc771jzhysl5pt \
	I0416 16:20:49.738335   11739 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3fa17152e5a024a90abff7235e0a39e3f709584e9dfd83eb49506ea6c646c588 \
	I0416 16:20:49.738386   11739 kubeadm.go:309] 	--control-plane 
	I0416 16:20:49.738403   11739 kubeadm.go:309] 
	I0416 16:20:49.738530   11739 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:20:49.738540   11739 kubeadm.go:309] 
	I0416 16:20:49.738656   11739 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bz5n4w.4jwc771jzhysl5pt \
	I0416 16:20:49.738799   11739 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3fa17152e5a024a90abff7235e0a39e3f709584e9dfd83eb49506ea6c646c588 
	I0416 16:20:49.741316   11739 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:20:49.741689   11739 cni.go:84] Creating CNI manager for ""
	I0416 16:20:49.741707   11739 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:20:49.743815   11739 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 16:20:49.745245   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 16:20:49.774616   11739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 16:20:49.805560   11739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:20:49.805607   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:49.805642   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-012036 minikube.k8s.io/updated_at=2024_04_16T16_20_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=addons-012036 minikube.k8s.io/primary=true
	I0416 16:20:49.915016   11739 ops.go:34] apiserver oom_adj: -16
	I0416 16:20:50.077670   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:50.578229   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:51.077988   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:51.578499   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:52.078307   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:52.578373   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:53.078470   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:53.578383   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:54.078540   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:54.578741   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:55.078116   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:55.578325   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:56.077949   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:56.578096   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:57.078592   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:57.577958   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:58.078331   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:58.578091   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:59.078402   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:59.577895   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:00.078117   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:00.578037   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:01.077917   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:01.578705   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:02.078379   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:02.245365   11739 kubeadm.go:1107] duration metric: took 12.439805001s to wait for elevateKubeSystemPrivileges
	W0416 16:21:02.245422   11739 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:21:02.245432   11739 kubeadm.go:393] duration metric: took 24.398727479s to StartCluster
	I0416 16:21:02.245455   11739 settings.go:142] acquiring lock: {Name:mk33f15d448e67a39bb041d9835f1ffaf867de17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:21:02.245609   11739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:21:02.246096   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/kubeconfig: {Name:mk4033fe222fc9823de19ea06fe9807d5ce31bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:21:02.246354   11739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:21:02.246387   11739 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0416 16:21:02.248506   11739 out.go:177] * Verifying Kubernetes components...
	I0416 16:21:02.246460   11739 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0416 16:21:02.246575   11739 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:21:02.249961   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:21:02.249990   11739 addons.go:69] Setting cloud-spanner=true in profile "addons-012036"
	I0416 16:21:02.250010   11739 addons.go:69] Setting default-storageclass=true in profile "addons-012036"
	I0416 16:21:02.250016   11739 addons.go:69] Setting gcp-auth=true in profile "addons-012036"
	I0416 16:21:02.250028   11739 addons.go:69] Setting inspektor-gadget=true in profile "addons-012036"
	I0416 16:21:02.250040   11739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-012036"
	I0416 16:21:02.250046   11739 mustload.go:65] Loading cluster: addons-012036
	I0416 16:21:02.250054   11739 addons.go:69] Setting helm-tiller=true in profile "addons-012036"
	I0416 16:21:02.250046   11739 addons.go:69] Setting registry=true in profile "addons-012036"
	I0416 16:21:02.250060   11739 addons.go:234] Setting addon inspektor-gadget=true in "addons-012036"
	I0416 16:21:02.250053   11739 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-012036"
	I0416 16:21:02.250075   11739 addons.go:234] Setting addon helm-tiller=true in "addons-012036"
	I0416 16:21:02.250077   11739 addons.go:234] Setting addon registry=true in "addons-012036"
	I0416 16:21:02.250081   11739 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-012036"
	I0416 16:21:02.250093   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250104   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250118   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250178   11739 addons.go:69] Setting volumesnapshots=true in profile "addons-012036"
	I0416 16:21:02.250197   11739 addons.go:234] Setting addon volumesnapshots=true in "addons-012036"
	I0416 16:21:02.250213   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250273   11739 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:21:02.250522   11739 addons.go:69] Setting storage-provisioner=true in profile "addons-012036"
	I0416 16:21:02.250536   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250542   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250545   11739 addons.go:234] Setting addon storage-provisioner=true in "addons-012036"
	I0416 16:21:02.250552   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250553   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250563   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250570   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250573   11739 addons.go:69] Setting metrics-server=true in profile "addons-012036"
	I0416 16:21:02.250573   11739 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-012036"
	I0416 16:21:02.250584   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250593   11739 addons.go:234] Setting addon metrics-server=true in "addons-012036"
	I0416 16:21:02.250596   11739 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-012036"
	I0416 16:21:02.250618   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250625   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250646   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250657   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250673   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250922   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250971   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250987   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250046   11739 addons.go:234] Setting addon cloud-spanner=true in "addons-012036"
	I0416 16:21:02.251059   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250539   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250922   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251218   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.249996   11739 addons.go:69] Setting ingress=true in profile "addons-012036"
	I0416 16:21:02.251262   11739 addons.go:234] Setting addon ingress=true in "addons-012036"
	I0416 16:21:02.250003   11739 addons.go:69] Setting ingress-dns=true in profile "addons-012036"
	I0416 16:21:02.251284   11739 addons.go:234] Setting addon ingress-dns=true in "addons-012036"
	I0416 16:21:02.251317   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251394   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250004   11739 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-012036"
	I0416 16:21:02.251499   11739 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-012036"
	I0416 16:21:02.251514   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251531   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251410   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251550   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.249995   11739 addons.go:69] Setting yakd=true in profile "addons-012036"
	I0416 16:21:02.251633   11739 addons.go:234] Setting addon yakd=true in "addons-012036"
	I0416 16:21:02.251658   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251662   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251686   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251834   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251858   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251869   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251887   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251987   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.252022   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250955   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.252198   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251162   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251445   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.273724   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0416 16:21:02.276412   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0416 16:21:02.276438   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.277527   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.277688   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.277707   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.278065   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.278085   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.278128   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.278477   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.278858   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.278903   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.279093   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.279112   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.285237   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0416 16:21:02.285832   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.286451   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.286473   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.286886   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.287169   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.289189   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0416 16:21:02.291429   11739 addons.go:234] Setting addon default-storageclass=true in "addons-012036"
	I0416 16:21:02.291481   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.291886   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.291932   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.292193   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I0416 16:21:02.292221   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0416 16:21:02.292269   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0416 16:21:02.292946   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.293693   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.293714   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.293785   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0416 16:21:02.294257   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.294356   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.294880   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.294921   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.303918   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.303936   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.303970   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0416 16:21:02.304033   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0416 16:21:02.304087   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34307
	I0416 16:21:02.304264   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305289   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305303   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.305317   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305350   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305377   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305387   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305603   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.305850   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.305864   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.305997   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.306006   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.306387   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.306539   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.306557   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.306743   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.307295   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.307314   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.307687   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.307709   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.307877   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.308550   11739 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-012036"
	I0416 16:21:02.308592   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.308833   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.308855   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.309350   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.309365   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.309670   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.309712   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.309851   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.310082   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.310260   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.310432   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.310509   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.310544   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.311157   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.311178   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.311636   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.312245   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.312278   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.312998   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.313366   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.313385   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.327391   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0416 16:21:02.327638   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.335514   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0416 16:21:02.337663   11739 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0416 16:21:02.339396   11739 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0416 16:21:02.339418   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0416 16:21:02.339446   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.343572   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0416 16:21:02.343754   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.343814   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.343936   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.344043   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.344391   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.344411   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.344489   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.344559   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.344571   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.344576   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.344761   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.345012   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.345073   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.345141   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.345155   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.345676   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.345708   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.345915   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.346077   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.346100   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.346117   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.346338   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.346901   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.346953   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.347044   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0416 16:21:02.347455   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.347667   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.347761   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I0416 16:21:02.347826   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.347841   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.349756   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0416 16:21:02.351259   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0416 16:21:02.349041   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0416 16:21:02.349084   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.349293   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.351347   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0416 16:21:02.351410   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.352475   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.352497   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.352520   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.352658   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.353090   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.353246   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.353695   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.353917   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.354354   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.354566   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.354809   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.356205   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.356202   11739 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0416 16:21:02.357524   11739 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0416 16:21:02.357538   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0416 16:21:02.357555   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.358866   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0416 16:21:02.357040   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.357611   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.358341   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.358355   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I0416 16:21:02.360267   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.360309   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.361620   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:21:02.360872   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.360987   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.361547   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.362233   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.363093   11739 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0416 16:21:02.364417   11739 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0416 16:21:02.364439   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0416 16:21:02.364458   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.363360   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.364525   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.363553   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.363011   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:21:02.363573   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.364024   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.365208   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.366011   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.366415   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.366430   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.366512   11739 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0416 16:21:02.366527   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0416 16:21:02.366545   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.366596   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.367337   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.367401   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.369205   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.369589   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.369627   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.369842   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.369941   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.370207   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.370368   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.370542   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.370871   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.370911   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.371077   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.371240   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.371362   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.371485   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.371835   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43551
	I0416 16:21:02.372316   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0416 16:21:02.372767   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.373311   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.373327   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.373766   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.373960   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.375887   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.376479   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.376496   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.376560   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0416 16:21:02.376960   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.377411   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.377547   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.377560   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.377891   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.378166   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.378207   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.378238   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.380326   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.382501   11739 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:21:02.383897   11739 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:21:02.383914   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:21:02.383936   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.386944   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0416 16:21:02.387888   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.387937   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.388591   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.388609   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.388683   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.388696   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.388882   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.388955   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.389501   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.389539   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.389885   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.390129   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.390345   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.392416   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0416 16:21:02.392818   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.393461   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.393477   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.393851   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.394025   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.395430   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0416 16:21:02.395991   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.398224   11739 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0416 16:21:02.396367   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.397289   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0416 16:21:02.399931   11739 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0416 16:21:02.399942   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0416 16:21:02.399962   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.400389   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.400488   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0416 16:21:02.400738   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.400752   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.400893   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.401431   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.401446   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.401701   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.401869   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.402013   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.402027   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.402367   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.402560   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.402584   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.403185   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.403225   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.404717   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.404788   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.404836   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I0416 16:21:02.405027   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0416 16:21:02.405226   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.405309   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.405311   11739 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:21:02.405323   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:21:02.405339   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.405761   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.405864   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.406026   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.406220   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.406286   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.406902   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.406917   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.407050   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.407062   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.407674   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.407730   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.407783   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.408167   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.408472   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.408512   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.408752   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.410823   11739 out.go:177]   - Using image docker.io/registry:2.8.3
	I0416 16:21:02.410096   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.410999   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.413501   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.414615   11739 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0416 16:21:02.415904   11739 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0416 16:21:02.414838   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.415922   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0416 16:21:02.415941   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.414841   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0416 16:21:02.414886   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.416035   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.417441   11739 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0416 16:21:02.416295   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.416577   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.417323   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0416 16:21:02.418789   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.418946   11739 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0416 16:21:02.418958   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0416 16:21:02.418976   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.419259   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.419282   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.419321   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.419581   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.419845   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.420035   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.420150   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.420557   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.420572   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.420687   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.421104   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.421423   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.421998   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.422027   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.423072   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.423416   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.423456   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.423617   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.425388   11739 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0416 16:21:02.423643   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.423784   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.423870   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.426545   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43853
	I0416 16:21:02.428314   11739 out.go:177]   - Using image docker.io/busybox:stable
	I0416 16:21:02.427398   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.429762   11739 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0416 16:21:02.429775   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0416 16:21:02.427703   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.429790   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.428266   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.430026   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.431694   11739 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0416 16:21:02.430444   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.430522   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.431575   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0416 16:21:02.432673   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.433117   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.433145   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.433161   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.433290   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 16:21:02.433302   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 16:21:02.433318   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.433488   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.433560   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.433620   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.433845   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.433892   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.434105   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.434117   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.434162   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.434358   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.434640   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.434898   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.436294   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.438347   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0416 16:21:02.436915   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.436940   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.437565   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.439846   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.439873   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.441162   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0416 16:21:02.439954   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.444305   11739 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0416 16:21:02.442751   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.445677   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0416 16:21:02.445690   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0416 16:21:02.445706   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.445732   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0416 16:21:02.447016   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0416 16:21:02.445887   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.448349   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0416 16:21:02.448696   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.449411   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.449787   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0416 16:21:02.449857   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.451123   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.449975   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.451085   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0416 16:21:02.451359   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.452836   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0416 16:21:02.454126   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0416 16:21:02.454145   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0416 16:21:02.454161   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.453010   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.457458   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.457969   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.457992   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.458210   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.458408   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.458572   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.458724   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	W0416 16:21:02.461672   11739 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38990->192.168.39.247:22: read: connection reset by peer
	I0416 16:21:02.461701   11739 retry.go:31] will retry after 189.459023ms: ssh: handshake failed: read tcp 192.168.39.1:38990->192.168.39.247:22: read: connection reset by peer
	W0416 16:21:02.461755   11739 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38994->192.168.39.247:22: read: connection reset by peer
	I0416 16:21:02.461762   11739 retry.go:31] will retry after 218.884854ms: ssh: handshake failed: read tcp 192.168.39.1:38994->192.168.39.247:22: read: connection reset by peer
	I0416 16:21:03.125540   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:21:03.152937   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0416 16:21:03.160973   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:21:03.217662   11739 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0416 16:21:03.217693   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0416 16:21:03.235220   11739 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0416 16:21:03.235240   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0416 16:21:03.271948   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0416 16:21:03.273061   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0416 16:21:03.335410   11739 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0416 16:21:03.335430   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0416 16:21:03.367315   11739 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0416 16:21:03.367343   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0416 16:21:03.383532   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0416 16:21:03.449647   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 16:21:03.449679   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0416 16:21:03.558017   11739 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.31162988s)
	I0416 16:21:03.558124   11739 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.308134967s)
	I0416 16:21:03.558206   11739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:21:03.558259   11739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:21:03.604155   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0416 16:21:03.604185   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0416 16:21:03.675082   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0416 16:21:03.712954   11739 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0416 16:21:03.712991   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0416 16:21:03.725620   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0416 16:21:03.725652   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0416 16:21:03.957137   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0416 16:21:03.986823   11739 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0416 16:21:03.986857   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0416 16:21:04.002037   11739 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0416 16:21:04.002071   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0416 16:21:04.076353   11739 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0416 16:21:04.076384   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0416 16:21:04.109301   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 16:21:04.109336   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 16:21:04.207442   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0416 16:21:04.207470   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0416 16:21:04.285930   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0416 16:21:04.285954   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0416 16:21:04.433178   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0416 16:21:04.447688   11739 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0416 16:21:04.447709   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0416 16:21:04.451980   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0416 16:21:04.452012   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0416 16:21:04.527445   11739 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0416 16:21:04.529677   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0416 16:21:04.535430   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 16:21:04.535457   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 16:21:04.559750   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0416 16:21:04.559776   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0416 16:21:04.632335   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0416 16:21:04.632359   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0416 16:21:04.737197   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 16:21:04.738632   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0416 16:21:04.738649   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0416 16:21:04.851978   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0416 16:21:04.852013   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0416 16:21:04.853932   11739 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0416 16:21:04.853952   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0416 16:21:05.066419   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0416 16:21:05.124481   11739 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:21:05.124509   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0416 16:21:05.289397   11739 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0416 16:21:05.289425   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0416 16:21:05.330380   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0416 16:21:05.330407   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0416 16:21:05.358721   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:21:05.518156   11739 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0416 16:21:05.518181   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0416 16:21:05.666777   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0416 16:21:05.666801   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0416 16:21:05.789309   11739 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0416 16:21:05.789335   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0416 16:21:05.946238   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0416 16:21:05.946272   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0416 16:21:06.086159   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0416 16:21:06.173694   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0416 16:21:06.173728   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0416 16:21:06.569130   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0416 16:21:06.569155   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0416 16:21:06.933508   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0416 16:21:06.933541   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0416 16:21:07.192066   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0416 16:21:08.620731   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.495153458s)
	I0416 16:21:08.620786   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.467813231s)
	I0416 16:21:08.620824   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.620832   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.459833165s)
	I0416 16:21:08.620861   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.620879   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.620836   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.620792   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.620935   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.620972   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.348997284s)
	I0416 16:21:08.621005   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621018   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621032   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.347948607s)
	I0416 16:21:08.621057   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621069   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621349   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621354   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621392   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621409   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621423   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621435   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621455   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621411   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621502   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621514   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621524   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621531   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621586   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621607   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621638   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621653   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621915   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621933   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621938   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621957   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621962   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621964   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621969   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.622011   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.622031   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.622047   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.622063   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.623108   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.623161   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623169   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.623294   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623306   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.623315   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.623323   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.623917   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623929   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.623965   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.623982   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623989   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.705212   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.705234   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.705610   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.705630   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:09.229269   11739 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0416 16:21:09.229318   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:09.232687   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:09.233197   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:09.233234   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:09.233439   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:09.233683   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:09.233874   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:09.234078   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:09.678964   11739 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0416 16:21:10.109897   11739 addons.go:234] Setting addon gcp-auth=true in "addons-012036"
	I0416 16:21:10.109953   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:10.110378   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:10.110421   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:10.126412   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0416 16:21:10.126911   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:10.127512   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:10.127542   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:10.127967   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:10.128454   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:10.128487   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:10.145505   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0416 16:21:10.145939   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:10.146434   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:10.146452   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:10.146818   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:10.147052   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:10.148828   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:10.149088   11739 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0416 16:21:10.149117   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:10.151756   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:10.152182   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:10.152207   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:10.152375   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:10.152554   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:10.152707   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:10.152863   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:12.906642   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.523070287s)
	I0416 16:21:12.906687   11739 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.348452998s)
	I0416 16:21:12.906744   11739 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.348457718s)
	I0416 16:21:12.906776   11739 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0416 16:21:12.906694   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.906803   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.906891   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.231777377s)
	I0416 16:21:12.906927   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.906936   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.906935   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.949755196s)
	I0416 16:21:12.906955   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.906972   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907021   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.473810332s)
	I0416 16:21:12.907042   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907051   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907163   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.169939474s)
	I0416 16:21:12.907165   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907185   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907196   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907208   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907217   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.907226   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907234   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907255   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907268   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.840822578s)
	I0416 16:21:12.907279   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907282   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907310   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907287   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.907321   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907327   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907373   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.548614454s)
	W0416 16:21:12.907404   11739 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0416 16:21:12.907425   11739 retry.go:31] will retry after 349.044494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0416 16:21:12.907448   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.82125895s)
	I0416 16:21:12.907470   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907479   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907582   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907616   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907623   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.907633   11739 addons.go:470] Verifying addon ingress=true in "addons-012036"
	I0416 16:21:12.907684   11739 node_ready.go:35] waiting up to 6m0s for node "addons-012036" to be "Ready" ...
	I0416 16:21:12.911340   11739 out.go:177] * Verifying ingress addon...
	I0416 16:21:12.907829   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907847   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907868   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907873   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907892   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907892   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910021   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910039   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.910044   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910043   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.910066   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910069   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.912892   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912909   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912927   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912941   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.912952   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912972   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.912980   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912955   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912912   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.913059   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912984   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.913094   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.913100   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912893   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.913138   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.913146   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.913757   11739 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0416 16:21:12.914881   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914891   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914886   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914891   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914902   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.914907   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914910   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.914932   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914945   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.914949   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.916813   11739 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-012036 service yakd-dashboard -n yakd-dashboard
	
	I0416 16:21:12.914954   11739 addons.go:470] Verifying addon metrics-server=true in "addons-012036"
	I0416 16:21:12.914936   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914973   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914976   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.918607   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.918625   11739 addons.go:470] Verifying addon registry=true in "addons-012036"
	I0416 16:21:12.918627   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.920437   11739 out.go:177] * Verifying registry addon...
	I0416 16:21:12.922391   11739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0416 16:21:12.952888   11739 node_ready.go:49] node "addons-012036" has status "Ready":"True"
	I0416 16:21:12.952921   11739 node_ready.go:38] duration metric: took 45.217454ms for node "addons-012036" to be "Ready" ...
	I0416 16:21:12.952933   11739 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:21:12.985362   11739 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0416 16:21:12.985388   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.026256   11739 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0416 16:21:13.026277   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:13.064219   11739 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gl82p" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:13.106683   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:13.106721   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:13.107030   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:13.107047   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:13.257166   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:21:13.414506   11739 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-012036" context rescaled to 1 replicas
	I0416 16:21:13.433313   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.458987   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:13.918718   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.939568   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:14.454317   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:14.455338   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:14.951464   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:14.984151   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:15.007238   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.815108326s)
	I0416 16:21:15.007273   11739 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.858161577s)
	I0416 16:21:15.007303   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:15.007322   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:15.009451   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:21:15.007612   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:15.007648   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:15.011311   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:15.011322   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:15.011329   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:15.013198   11739 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0416 16:21:15.011598   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:15.011628   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:15.014988   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:15.015003   11739 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-012036"
	I0416 16:21:15.015032   11739 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0416 16:21:15.015057   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0416 16:21:15.016775   11739 out.go:177] * Verifying csi-hostpath-driver addon...
	I0416 16:21:15.019486   11739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0416 16:21:15.091562   11739 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0416 16:21:15.091596   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:15.157055   11739 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0416 16:21:15.157080   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0416 16:21:15.205622   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:15.294230   11739 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0416 16:21:15.294255   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0416 16:21:15.452946   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:15.463866   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:15.507523   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0416 16:21:15.533002   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:15.919690   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:15.928648   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.025982   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:16.354686   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.097470069s)
	I0416 16:21:16.354749   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:16.354765   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:16.355101   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:16.355117   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:16.355130   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:16.355154   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:16.355157   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:16.355405   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:16.355423   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:16.422759   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:16.428095   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.529478   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:16.946125   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.951962   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.055631   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.548068318s)
	I0416 16:21:17.055685   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:17.055699   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:17.055814   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:17.056015   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:17.056032   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:17.056041   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:17.056049   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:17.056363   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:17.056383   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:17.058704   11739 addons.go:470] Verifying addon gcp-auth=true in "addons-012036"
	I0416 16:21:17.060363   11739 out.go:177] * Verifying gcp-auth addon...
	I0416 16:21:17.062577   11739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0416 16:21:17.084754   11739 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0416 16:21:17.084783   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:17.418498   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.429300   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:17.528729   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:17.571335   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:17.575611   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:17.920509   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.929016   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:18.026504   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:18.069437   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:18.419342   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:18.428828   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:18.528965   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:18.687813   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:18.918877   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:18.928510   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:19.028855   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:19.066768   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:19.422740   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:19.433720   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:19.527065   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:19.568612   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:19.918988   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:19.929292   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:20.026828   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:20.070029   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:20.072712   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:20.419631   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:20.427038   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:20.526013   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:20.569871   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:20.918782   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:20.929007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:21.026968   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.069316   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:21.419284   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.431226   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:21.526116   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.569704   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:21.920799   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.930105   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:22.026697   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:22.068153   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:22.074572   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:22.419675   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:22.427675   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:22.527464   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:22.586536   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:22.919890   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:22.932507   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:23.029985   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:23.321093   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:23.423645   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:23.433621   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:23.526249   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:23.567208   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:23.919063   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:23.927455   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:24.025889   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:24.067262   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:24.419219   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:24.427363   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:24.526297   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:24.566506   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:24.572700   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:24.920426   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:24.928796   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:25.028600   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:25.071003   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:25.426558   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:25.434100   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:25.544386   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:25.571721   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:25.921057   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:25.929363   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:26.027998   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:26.070517   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:26.419234   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:26.428007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:26.526370   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:26.568441   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:26.919244   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:26.941230   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.027059   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:27.066551   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:27.072762   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:27.419116   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:27.429041   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.526043   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:27.571483   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:27.977563   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.978471   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.026437   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:28.067572   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:28.419017   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.428929   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:28.527080   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:28.566312   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:28.920840   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.932883   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:29.027460   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:29.066656   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:29.419461   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:29.428031   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:29.528289   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:29.570994   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:29.574892   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:29.918335   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:29.927928   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:30.026471   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:30.069840   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:30.421753   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:30.427721   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:30.540511   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:30.568451   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:30.920571   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:30.928588   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:31.026502   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:31.066534   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:31.428180   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:31.437014   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:31.525627   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:31.569411   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:31.575879   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:31.919571   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:31.927697   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:32.027713   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:32.072358   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:32.418907   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:32.433717   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:32.527771   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:32.605694   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:32.919662   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:32.928640   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:33.028527   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:33.068989   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:33.436061   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:33.439057   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:33.525595   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:33.567846   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:33.919668   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:33.927768   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:34.026260   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:34.068943   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:34.071809   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:34.418935   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:34.429259   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:34.528664   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:34.568883   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:34.918810   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:34.934556   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:35.026726   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:35.067919   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:35.433784   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:35.433951   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:35.528585   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:35.566789   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:35.919488   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:35.928195   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:36.026872   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:36.068872   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:36.072342   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:36.419292   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:36.426967   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:36.533258   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:36.570771   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:36.919626   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:36.927951   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:37.037785   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:37.067052   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:37.422569   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:37.439599   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:37.529498   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:37.567901   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:37.918424   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:37.933129   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:38.026714   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:38.066818   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:38.079951   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:38.420809   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:38.429074   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:38.538509   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:38.567575   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:38.918180   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:38.928164   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:39.026326   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:39.068757   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:39.419843   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:39.428441   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:39.544236   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:39.579130   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:39.918954   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:39.928605   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:40.028791   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:40.069156   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:40.419662   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:40.428726   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:40.531405   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:40.580027   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:40.581930   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.157259   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.158600   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.168482   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:41.173406   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:41.418383   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.427485   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:41.526159   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:41.568324   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.919921   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.929966   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:42.025604   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:42.068805   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:42.419268   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.427905   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:42.532156   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:42.599755   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:42.600033   11739 pod_ready.go:92] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.600055   11739 pod_ready.go:81] duration metric: took 29.53580147s for pod "coredns-76f75df574-gl82p" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.600066   11739 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wvjzk" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.606322   11739 pod_ready.go:97] error getting pod "coredns-76f75df574-wvjzk" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-wvjzk" not found
	I0416 16:21:42.606363   11739 pod_ready.go:81] duration metric: took 6.281949ms for pod "coredns-76f75df574-wvjzk" in "kube-system" namespace to be "Ready" ...
	E0416 16:21:42.606377   11739 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-wvjzk" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-wvjzk" not found
	I0416 16:21:42.606386   11739 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.676961   11739 pod_ready.go:92] pod "etcd-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.676988   11739 pod_ready.go:81] duration metric: took 70.59396ms for pod "etcd-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.677001   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.740484   11739 pod_ready.go:92] pod "kube-apiserver-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.740511   11739 pod_ready.go:81] duration metric: took 63.502271ms for pod "kube-apiserver-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.740525   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.763193   11739 pod_ready.go:92] pod "kube-controller-manager-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.763222   11739 pod_ready.go:81] duration metric: took 22.689553ms for pod "kube-controller-manager-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.763240   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6dq9" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.784485   11739 pod_ready.go:92] pod "kube-proxy-s6dq9" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.784518   11739 pod_ready.go:81] duration metric: took 21.270314ms for pod "kube-proxy-s6dq9" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.784530   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.921372   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.928974   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:43.027687   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:43.070244   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:43.168434   11739 pod_ready.go:92] pod "kube-scheduler-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:43.168458   11739 pod_ready.go:81] duration metric: took 383.92007ms for pod "kube-scheduler-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.168469   11739 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-rh5ch" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.418971   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:43.428659   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:43.525960   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:43.568354   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:43.570224   11739 pod_ready.go:92] pod "metrics-server-75d6c48ddd-rh5ch" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:43.570255   11739 pod_ready.go:81] duration metric: took 401.77778ms for pod "metrics-server-75d6c48ddd-rh5ch" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.570270   11739 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nwsz2" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.918550   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:43.927375   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:43.968212   11739 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-nwsz2" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:43.968238   11739 pod_ready.go:81] duration metric: took 397.960681ms for pod "nvidia-device-plugin-daemonset-nwsz2" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.968255   11739 pod_ready.go:38] duration metric: took 31.015307403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:21:43.968269   11739 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:21:43.968321   11739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:21:43.998399   11739 api_server.go:72] duration metric: took 41.751973974s to wait for apiserver process to appear ...
	I0416 16:21:43.998429   11739 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:21:43.998451   11739 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I0416 16:21:44.008704   11739 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I0416 16:21:44.011206   11739 api_server.go:141] control plane version: v1.29.3
	I0416 16:21:44.011235   11739 api_server.go:131] duration metric: took 12.80009ms to wait for apiserver health ...
	I0416 16:21:44.011243   11739 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:21:44.045094   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:44.068699   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:44.320585   11739 system_pods.go:59] 18 kube-system pods found
	I0416 16:21:44.320619   11739 system_pods.go:61] "coredns-76f75df574-gl82p" [ce0d912e-d8fc-45eb-a25f-3cdbe67e511c] Running
	I0416 16:21:44.320626   11739 system_pods.go:61] "csi-hostpath-attacher-0" [60a4dcb7-fc8d-45d7-912a-052b70ffedea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0416 16:21:44.320634   11739 system_pods.go:61] "csi-hostpath-resizer-0" [ed11f0c4-aade-4f74-ae20-250260b20010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0416 16:21:44.320643   11739 system_pods.go:61] "csi-hostpathplugin-vfbkp" [6942c4bf-39db-43ca-bf0e-52f91546c9da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0416 16:21:44.320649   11739 system_pods.go:61] "etcd-addons-012036" [501c490e-9df4-4d77-ab24-6b1c484f3f57] Running
	I0416 16:21:44.320654   11739 system_pods.go:61] "kube-apiserver-addons-012036" [a206cfa9-3edb-411e-85d6-c5973862d675] Running
	I0416 16:21:44.320659   11739 system_pods.go:61] "kube-controller-manager-addons-012036" [5efce1ab-3b04-4892-b978-41d3132da3f9] Running
	I0416 16:21:44.320669   11739 system_pods.go:61] "kube-ingress-dns-minikube" [0445f263-dae8-46f5-a610-7bf97d2e8310] Running
	I0416 16:21:44.320678   11739 system_pods.go:61] "kube-proxy-s6dq9" [3870d3d7-c051-4d2c-aaed-8b4e4e59d483] Running
	I0416 16:21:44.320683   11739 system_pods.go:61] "kube-scheduler-addons-012036" [5d9ec397-85be-4b49-934c-bce74b51177d] Running
	I0416 16:21:44.320688   11739 system_pods.go:61] "metrics-server-75d6c48ddd-rh5ch" [dd9e68e9-89db-492e-b995-43adcef90c7b] Running
	I0416 16:21:44.320693   11739 system_pods.go:61] "nvidia-device-plugin-daemonset-nwsz2" [c725f54f-6971-493f-bfd5-62cf6aec55cd] Running
	I0416 16:21:44.320696   11739 system_pods.go:61] "registry-jcxdc" [b635d906-6cfa-4550-af73-b2a6efeed3a1] Running
	I0416 16:21:44.320700   11739 system_pods.go:61] "registry-proxy-vnvqm" [337f4757-d2bc-47a6-a02c-27da4429dc2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0416 16:21:44.320707   11739 system_pods.go:61] "snapshot-controller-58dbcc7b99-dmcpx" [776bbbd0-0b95-4985-8780-201db3bb42a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.320718   11739 system_pods.go:61] "snapshot-controller-58dbcc7b99-wr6z2" [213f9675-e555-47a7-82fc-5a5323329e00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.320725   11739 system_pods.go:61] "storage-provisioner" [943be509-0cb7-46d3-be2a-414fc7408f93] Running
	I0416 16:21:44.320730   11739 system_pods.go:61] "tiller-deploy-7b677967b9-jqj87" [fa15f4cf-8401-4c01-8f66-8e92e3945327] Running
	I0416 16:21:44.320739   11739 system_pods.go:74] duration metric: took 309.489554ms to wait for pod list to return data ...
	I0416 16:21:44.320749   11739 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:21:44.368246   11739 default_sa.go:45] found service account: "default"
	I0416 16:21:44.368274   11739 default_sa.go:55] duration metric: took 47.515468ms for default service account to be created ...
	I0416 16:21:44.368282   11739 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:21:44.423629   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:44.429057   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:44.526289   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:44.566300   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:44.577771   11739 system_pods.go:86] 18 kube-system pods found
	I0416 16:21:44.577814   11739 system_pods.go:89] "coredns-76f75df574-gl82p" [ce0d912e-d8fc-45eb-a25f-3cdbe67e511c] Running
	I0416 16:21:44.577823   11739 system_pods.go:89] "csi-hostpath-attacher-0" [60a4dcb7-fc8d-45d7-912a-052b70ffedea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0416 16:21:44.577831   11739 system_pods.go:89] "csi-hostpath-resizer-0" [ed11f0c4-aade-4f74-ae20-250260b20010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0416 16:21:44.577838   11739 system_pods.go:89] "csi-hostpathplugin-vfbkp" [6942c4bf-39db-43ca-bf0e-52f91546c9da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0416 16:21:44.577844   11739 system_pods.go:89] "etcd-addons-012036" [501c490e-9df4-4d77-ab24-6b1c484f3f57] Running
	I0416 16:21:44.577850   11739 system_pods.go:89] "kube-apiserver-addons-012036" [a206cfa9-3edb-411e-85d6-c5973862d675] Running
	I0416 16:21:44.577857   11739 system_pods.go:89] "kube-controller-manager-addons-012036" [5efce1ab-3b04-4892-b978-41d3132da3f9] Running
	I0416 16:21:44.577864   11739 system_pods.go:89] "kube-ingress-dns-minikube" [0445f263-dae8-46f5-a610-7bf97d2e8310] Running
	I0416 16:21:44.577870   11739 system_pods.go:89] "kube-proxy-s6dq9" [3870d3d7-c051-4d2c-aaed-8b4e4e59d483] Running
	I0416 16:21:44.577876   11739 system_pods.go:89] "kube-scheduler-addons-012036" [5d9ec397-85be-4b49-934c-bce74b51177d] Running
	I0416 16:21:44.577887   11739 system_pods.go:89] "metrics-server-75d6c48ddd-rh5ch" [dd9e68e9-89db-492e-b995-43adcef90c7b] Running
	I0416 16:21:44.577893   11739 system_pods.go:89] "nvidia-device-plugin-daemonset-nwsz2" [c725f54f-6971-493f-bfd5-62cf6aec55cd] Running
	I0416 16:21:44.577903   11739 system_pods.go:89] "registry-jcxdc" [b635d906-6cfa-4550-af73-b2a6efeed3a1] Running
	I0416 16:21:44.577915   11739 system_pods.go:89] "registry-proxy-vnvqm" [337f4757-d2bc-47a6-a02c-27da4429dc2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0416 16:21:44.577927   11739 system_pods.go:89] "snapshot-controller-58dbcc7b99-dmcpx" [776bbbd0-0b95-4985-8780-201db3bb42a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.577937   11739 system_pods.go:89] "snapshot-controller-58dbcc7b99-wr6z2" [213f9675-e555-47a7-82fc-5a5323329e00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.577945   11739 system_pods.go:89] "storage-provisioner" [943be509-0cb7-46d3-be2a-414fc7408f93] Running
	I0416 16:21:44.577950   11739 system_pods.go:89] "tiller-deploy-7b677967b9-jqj87" [fa15f4cf-8401-4c01-8f66-8e92e3945327] Running
	I0416 16:21:44.577961   11739 system_pods.go:126] duration metric: took 209.673583ms to wait for k8s-apps to be running ...
	I0416 16:21:44.577971   11739 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:21:44.578031   11739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:21:44.596715   11739 system_svc.go:56] duration metric: took 18.736097ms WaitForService to wait for kubelet
	I0416 16:21:44.596755   11739 kubeadm.go:576] duration metric: took 42.350333594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:21:44.596781   11739 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:21:44.769176   11739 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:21:44.769208   11739 node_conditions.go:123] node cpu capacity is 2
	I0416 16:21:44.769219   11739 node_conditions.go:105] duration metric: took 172.432936ms to run NodePressure ...
	I0416 16:21:44.769230   11739 start.go:240] waiting for startup goroutines ...
	I0416 16:21:44.918938   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:44.928007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:45.026009   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:45.067067   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:45.420070   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:45.433085   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:45.526156   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:45.567468   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:45.919498   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:45.928749   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:46.034305   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:46.067107   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:46.423238   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:46.429876   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:46.540331   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:46.573821   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:46.924514   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:46.930047   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:47.035215   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:47.068568   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:47.419016   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:47.429188   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:47.542077   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:47.567311   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:47.920095   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:47.929300   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:48.032233   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:48.067144   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:48.419797   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:48.429324   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:48.533309   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:48.567740   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:48.920755   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:48.928808   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:49.028510   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:49.071276   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:49.422874   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:49.433560   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:49.563574   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:49.567409   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:49.918647   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:49.929876   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:50.031612   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:50.067663   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:50.419419   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:50.427666   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:50.526517   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:50.567605   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:50.920134   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:50.935196   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:51.027228   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:51.068506   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:51.419537   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:51.428296   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:51.527417   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:51.566988   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:51.918679   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:51.928245   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:52.026395   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:52.067348   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:52.608007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:52.609694   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:52.610067   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:52.611916   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:52.921796   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:52.927999   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:53.029758   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:53.068596   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:53.421296   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:53.428034   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:53.526984   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:53.571293   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:53.918982   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:53.928550   11739 kapi.go:107] duration metric: took 41.006157226s to wait for kubernetes.io/minikube-addons=registry ...
	I0416 16:21:54.027826   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:54.066273   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:54.420492   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:54.527817   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:54.566925   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:54.919980   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:55.026302   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:55.067536   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:55.422916   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:55.528943   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:55.569114   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:55.919250   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:56.028236   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:56.066943   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:56.418523   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:56.528961   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:56.573769   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.095128   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.096024   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.099583   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:57.419282   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.526351   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:57.567086   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.918654   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:58.026864   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:58.066933   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:58.421658   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:58.532139   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:58.567278   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:58.920580   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:59.026632   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:59.067998   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:59.418818   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:59.533691   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:59.567356   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:59.918597   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:00.026652   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:00.068887   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:00.418676   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:00.527773   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:00.567245   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:00.919363   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:01.025654   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:01.067407   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:01.420162   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:01.657351   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:01.666918   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:01.918890   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:02.027098   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:02.068887   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:02.426828   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:02.536138   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:02.569596   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:02.922699   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:03.026164   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:03.067186   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:03.422306   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:03.526396   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:03.566535   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:03.919387   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:04.028116   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:04.066990   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:04.419072   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:04.526527   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:04.567490   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:04.926334   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:05.026091   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:05.068499   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:05.420077   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:05.526427   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:05.566637   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:05.919430   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:06.025799   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:06.075097   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:06.428150   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:06.533086   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:06.567956   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:06.919028   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:07.026475   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:07.066433   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:07.420959   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:07.526043   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:07.566841   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:07.918663   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:08.026995   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:08.067000   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:08.418656   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:08.525750   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:08.568222   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:08.923294   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:09.026362   11739 kapi.go:107] duration metric: took 54.006877276s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0416 16:22:09.066971   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:09.421417   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:09.567581   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:09.920236   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:10.067033   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:10.418970   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:10.567521   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:10.921107   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:11.067581   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:11.419399   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:11.567860   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:11.919496   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:12.067248   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:12.420964   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:12.567742   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:12.918725   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:13.067011   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:13.419164   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:13.568325   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:13.919626   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:14.068051   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:14.418694   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:14.568097   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:14.920642   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:15.067859   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:15.418672   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:15.567575   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:15.923572   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:16.067436   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:16.419522   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:16.567392   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:16.919809   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:17.066983   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:17.433300   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:17.567843   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:18.217053   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:18.221262   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:18.421781   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:18.567043   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:18.919170   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:19.067583   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:19.424987   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:19.567389   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:19.920642   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:20.067729   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:20.423612   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:20.566692   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:20.923986   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:21.068049   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:21.449608   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:21.566403   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:21.919594   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:22.066559   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:22.421085   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:22.567447   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:22.920995   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:23.067623   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:23.442449   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:23.570662   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:23.976244   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:24.088177   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:24.422768   11739 kapi.go:107] duration metric: took 1m11.509008192s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0416 16:22:24.574530   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:25.067008   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:25.566259   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:26.068803   11739 kapi.go:107] duration metric: took 1m9.006220739s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0416 16:22:26.070824   11739 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-012036 cluster.
	I0416 16:22:26.072285   11739 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0416 16:22:26.073800   11739 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0416 16:22:26.075306   11739 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, helm-tiller, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0416 16:22:26.076737   11739 addons.go:505] duration metric: took 1m23.830287578s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass helm-tiller metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0416 16:22:26.076791   11739 start.go:245] waiting for cluster config update ...
	I0416 16:22:26.076808   11739 start.go:254] writing updated cluster config ...
	I0416 16:22:26.077064   11739 ssh_runner.go:195] Run: rm -f paused
	I0416 16:22:26.137028   11739 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 16:22:26.139159   11739 out.go:177] * Done! kubectl is now configured to use "addons-012036" cluster and "default" namespace by default
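The repeated `kapi.go:96` lines above are a fixed-interval poll against a label selector until the matching pod leaves `Pending`, with the `kapi.go:107` duration metric emitted on success. A minimal sketch of that poll-until-ready pattern (the `get_pod_phase` callable is a hypothetical stand-in for the Kubernetes API call; minikube's actual loop lives in kapi.go):

```python
import time

def wait_for_pod(get_pod_phase, timeout=90.0, interval=0.5,
                 now=time.monotonic, sleep=time.sleep):
    """Poll get_pod_phase() until it returns "Running" or timeout expires.

    Returns the elapsed seconds on success, raises TimeoutError otherwise.
    (Sketch only -- not minikube's implementation.)
    """
    start = now()
    while True:
        phase = get_pod_phase()
        if phase == "Running":
            return now() - start
        if now() - start > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated API: Pending for the first three polls, then Running.
phases = iter(["Pending", "Pending", "Pending", "Running"])
elapsed = wait_for_pod(lambda: next(phases), interval=0)
print(elapsed >= 0)  # True
```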
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD
	e6e500e0cfb34       dd1b12fcb6097       Less than a second ago   Created             hello-world-app                          0                   95221feb08a59       hello-world-app-5d77478584-9rvj4
	503c45024e017       e289a478ace02       11 seconds ago           Running             nginx                                    0                   4f7f5d61c0843       nginx
	ed6ddd1e144f3       7373e995f4086       13 seconds ago           Running             headlamp                                 0                   0ce3daf3e73e6       headlamp-5b77dbd7c4-z758s
	e52e5d108177b       a416a98b71e22       15 seconds ago           Exited              helper-pod                               0                   da673e61d30c1       helper-pod-delete-pvc-8f41ec9b-ffc7-4a6a-90f0-74da7d87242a
	8458c44bf38f0       ba5dc23f65d4c       19 seconds ago           Exited              busybox                                  0                   1c5c3aa926f39       test-local-path
	64447d010527c       db2fc13d44d50       42 seconds ago           Running             gcp-auth                                 0                   a38442e42d2f9       gcp-auth-7d69788767-6prgz
	a344f59b6b138       ffcc66479b5ba       44 seconds ago           Running             controller                               0                   0bdec488f9cff       ingress-nginx-controller-65496f9567-88dw2
	b6246028475c8       e255e073c508c       About a minute ago       Exited              hostpath                                 0                   72d9cfa98f2eb       csi-hostpathplugin-vfbkp
	9ed676fde3924       88ef14a257f42       About a minute ago       Exited              node-driver-registrar                    0                   72d9cfa98f2eb       csi-hostpathplugin-vfbkp
	93b2144a44fd3       19a639eda60f0       About a minute ago       Exited              csi-resizer                              0                   b86d5ac9a2861       csi-hostpath-resizer-0
	704964b5972d3       a1ed5895ba635       About a minute ago       Exited              csi-external-health-monitor-controller   0                   72d9cfa98f2eb       csi-hostpathplugin-vfbkp
	86f1572e10b06       59cbb42146a37       About a minute ago       Exited              csi-attacher                             0                   a4a08f51702c8       csi-hostpath-attacher-0
	5a2c5d1c2d8f8       b29d748098e32       About a minute ago       Exited              patch                                    0                   cd5b763a125cb       ingress-nginx-admission-patch-kscqv
	bf89a3e6bbb5d       b29d748098e32       About a minute ago       Exited              create                                   0                   d9d1f31083959       ingress-nginx-admission-create-zpdtd
	3c4ada40b02b1       aa61ee9c70bc4       About a minute ago       Running             volume-snapshot-controller               0                   0d585ad0b737c       snapshot-controller-58dbcc7b99-wr6z2
	c25c9e32964c8       aa61ee9c70bc4       About a minute ago       Running             volume-snapshot-controller               0                   66c66b59d6426       snapshot-controller-58dbcc7b99-dmcpx
	bba135a45c3af       e16d1e3a10667       About a minute ago       Running             local-path-provisioner                   0                   7f73a664b403d       local-path-provisioner-78b46b4d5c-fgvc7
	70f2718c1ab6c       31de47c733c91       About a minute ago       Running             yakd                                     0                   e355a26fbd015       yakd-dashboard-9947fc6bf-knpbf
	0830eb1f2606b       3f39089e90831       About a minute ago       Running             tiller                                   0                   56822ad5c90e2       tiller-deploy-7b677967b9-jqj87
	d754b9971ad2d       6e38f40d628db       About a minute ago       Running             storage-provisioner                      0                   679c17820d273       storage-provisioner
	f7179288f854b       cbb01a7bd410d       2 minutes ago            Running             coredns                                  0                   61d82bf55b66d       coredns-76f75df574-gl82p
	b656b7633700b       a1d263b5dc5b0       2 minutes ago            Running             kube-proxy                               0                   2d6ab0273ee54       kube-proxy-s6dq9
	24af4e069b22f       8c390d98f50c0       2 minutes ago            Running             kube-scheduler                           0                   dbf77639f3fd7       kube-scheduler-addons-012036
	48a1e53b66a23       39f995c9f1996       2 minutes ago            Running             kube-apiserver                           0                   09c33e1ba2865       kube-apiserver-addons-012036
	085bd521d80e6       3861cfcd7c04c       2 minutes ago            Running             etcd                                     0                   3472a3055087b       etcd-addons-012036
	87ef232e07b96       6052a25da3f97       2 minutes ago            Running             kube-controller-manager                  0                   fc66104249ac6       kube-controller-manager-addons-012036
	
	
	==> containerd <==
	Apr 16 16:23:06 addons-012036 containerd[649]: time="2024-04-16T16:23:06.817899732Z" level=info msg="TearDown network for sandbox \"a4a08f51702c8e7c5b8049acc4214bad2a87c3c7fabcfe2c7122aff76cb50438\" successfully"
	Apr 16 16:23:06 addons-012036 containerd[649]: time="2024-04-16T16:23:06.818230820Z" level=info msg="StopPodSandbox for \"a4a08f51702c8e7c5b8049acc4214bad2a87c3c7fabcfe2c7122aff76cb50438\" returns successfully"
	Apr 16 16:23:06 addons-012036 containerd[649]: time="2024-04-16T16:23:06.904111431Z" level=info msg="TearDown network for sandbox \"72d9cfa98f2eb6492669542c9c3b48419eb5c4bcabe6648d40bea4c52f1cc094\" successfully"
	Apr 16 16:23:06 addons-012036 containerd[649]: time="2024-04-16T16:23:06.904174761Z" level=info msg="StopPodSandbox for \"72d9cfa98f2eb6492669542c9c3b48419eb5c4bcabe6648d40bea4c52f1cc094\" returns successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.001793352Z" level=info msg="TearDown network for sandbox \"b86d5ac9a286145f4625b3d94219107c59ea51f4c98f34894cafad0dd4c37354\" successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.001862526Z" level=info msg="StopPodSandbox for \"b86d5ac9a286145f4625b3d94219107c59ea51f4c98f34894cafad0dd4c37354\" returns successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.450319053Z" level=info msg="ImageCreate event name:\"gcr.io/google-samples/hello-app:1.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.452054813Z" level=info msg="stop pulling image gcr.io/google-samples/hello-app:1.0: active requests=0, bytes read=13620577"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.454838292Z" level=info msg="ImageCreate event name:\"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.459344037Z" level=info msg="ImageCreate event name:\"gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.459830676Z" level=info msg="Pulled image \"gcr.io/google-samples/hello-app:1.0\" with image id \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\", repo tag \"gcr.io/google-samples/hello-app:1.0\", repo digest \"gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7\", size \"13745365\" in 3.388756174s"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.459899295Z" level=info msg="PullImage \"gcr.io/google-samples/hello-app:1.0\" returns image reference \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.465081645Z" level=info msg="CreateContainer within sandbox \"95221feb08a59051dc24ce37b54b649f3434412c0aee291e46d96d0928faaf1a\" for container &ContainerMetadata{Name:hello-world-app,Attempt:0,}"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.495958042Z" level=info msg="CreateContainer within sandbox \"95221feb08a59051dc24ce37b54b649f3434412c0aee291e46d96d0928faaf1a\" for &ContainerMetadata{Name:hello-world-app,Attempt:0,} returns container id \"e6e500e0cfb34675cba532d5cac78f0eac98f3e495424894449dae2fca52b0be\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.497261284Z" level=info msg="StartContainer for \"e6e500e0cfb34675cba532d5cac78f0eac98f3e495424894449dae2fca52b0be\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.547495996Z" level=info msg="RemoveContainer for \"b599890fb9c034735fb9f5964f815268eb07ef30ac8729517f00fa72d6109696\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.577301026Z" level=info msg="RemoveContainer for \"b599890fb9c034735fb9f5964f815268eb07ef30ac8729517f00fa72d6109696\" returns successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.595702274Z" level=info msg="RemoveContainer for \"6dcb42bc8b7b8829634f03ba603a3768ca32d9af9abd13a9147ddb3658c72b8f\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.615857773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-test,Uid:4ad3331b-c57e-449b-a159-9d2f3a9ecabf,Namespace:kube-system,Attempt:0,}"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.639483922Z" level=info msg="RemoveContainer for \"6dcb42bc8b7b8829634f03ba603a3768ca32d9af9abd13a9147ddb3658c72b8f\" returns successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.641714146Z" level=info msg="RemoveContainer for \"2a1e553761953c4abc10789770687355ba5d2a4b6d770e53e35ebf1b3aa0bb96\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.658947522Z" level=info msg="RemoveContainer for \"2a1e553761953c4abc10789770687355ba5d2a4b6d770e53e35ebf1b3aa0bb96\" returns successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.686584987Z" level=info msg="RemoveContainer for \"b6246028475c81eacba55f063c21da9c4c960dd83314f5fd9af2137d2835d32c\""
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.726947589Z" level=info msg="RemoveContainer for \"b6246028475c81eacba55f063c21da9c4c960dd83314f5fd9af2137d2835d32c\" returns successfully"
	Apr 16 16:23:07 addons-012036 containerd[649]: time="2024-04-16T16:23:07.748885235Z" level=info msg="RemoveContainer for \"9ed676fde39246300ab97468d5587ac60c654caf1552c5e83d60f7b7cfe1aef7\""
	
	
	==> coredns [f7179288f854b31cc4cbdd569bfcd28c058e519f2bf3526e9928a17684729742] <==
	[INFO] 10.244.0.21:50027 - 47550 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095186s
	[INFO] 10.244.0.21:50027 - 60337 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000145331s
	[INFO] 10.244.0.21:48395 - 33898 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098138s
	[INFO] 10.244.0.21:50027 - 27296 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104926s
	[INFO] 10.244.0.21:48395 - 49288 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103582s
	[INFO] 10.244.0.21:50027 - 35119 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135435s
	[INFO] 10.244.0.21:48395 - 39620 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057238s
	[INFO] 10.244.0.21:48395 - 25647 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100482s
	[INFO] 10.244.0.21:48395 - 14952 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008583s
	[INFO] 10.244.0.21:48395 - 9796 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046317s
	[INFO] 10.244.0.21:48395 - 62384 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000133254s
	[INFO] 10.244.0.21:55726 - 8178 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000162832s
	[INFO] 10.244.0.21:52877 - 53045 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000140694s
	[INFO] 10.244.0.21:52877 - 32168 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075802s
	[INFO] 10.244.0.21:55726 - 17601 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104234s
	[INFO] 10.244.0.21:52877 - 9948 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078815s
	[INFO] 10.244.0.21:55726 - 20915 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000218449s
	[INFO] 10.244.0.21:52877 - 34362 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077228s
	[INFO] 10.244.0.21:55726 - 11535 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062301s
	[INFO] 10.244.0.21:52877 - 36745 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000122211s
	[INFO] 10.244.0.21:55726 - 30739 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087967s
	[INFO] 10.244.0.21:55726 - 28953 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000184869s
	[INFO] 10.244.0.21:52877 - 29987 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012142s
	[INFO] 10.244.0.21:55726 - 65482 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000248392s
	[INFO] 10.244.0.21:52877 - 6403 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000310284s
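The NXDOMAIN/NOERROR pairs above are the normal pod `resolv.conf` search-path walk: with the default `ndots:5`, `hello-world-app.default.svc.cluster.local` (four dots) is first tried with each search suffix appended, and only the final as-given query answers NOERROR. A sketch of that candidate-name expansion (the search list is assumed from the log's `ingress-nginx`-namespace suffixes; this is resolver behavior, not CoreDNS code):

```python
def expand_query(name, search_domains, ndots=5):
    """Return the candidate FQDNs a glibc-style resolver tries, in order.

    A relative name with fewer than `ndots` dots is tried with each
    search suffix appended first; the name as given is tried last.
    """
    if name.endswith("."):            # already absolute: no expansion
        return [name.rstrip(".")]
    candidates = []
    if name.count(".") < ndots:
        candidates = [f"{name}.{d}" for d in search_domains]
    return candidates + [name]

# Search list of a pod in the ingress-nginx namespace, as seen in the log.
search = ["ingress-nginx.svc.cluster.local", "svc.cluster.local",
          "cluster.local"]
for fqdn in expand_query("hello-world-app.default.svc.cluster.local", search):
    print(fqdn)   # three NXDOMAIN candidates, then the NOERROR name
```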
	
	
	==> describe nodes <==
	Name:               addons-012036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-012036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=addons-012036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_20_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-012036
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:20:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-012036
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:23:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    addons-012036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 708d4aa12b0c448c993837b39a2c42f7
	  System UUID:                708d4aa1-2b0c-448c-9938-37b39a2c42f7
	  Boot ID:                    879a873f-bc9d-45b9-9166-b9cec81a5e41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9rvj4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  gcp-auth                    gcp-auth-7d69788767-6prgz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  headlamp                    headlamp-5b77dbd7c4-z758s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  ingress-nginx               ingress-nginx-controller-65496f9567-88dw2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         116s
	  kube-system                 coredns-76f75df574-gl82p                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m6s
	  kube-system                 etcd-addons-012036                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m18s
	  kube-system                 helm-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kube-system                 kube-apiserver-addons-012036                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-012036        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-s6dq9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-012036                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 snapshot-controller-58dbcc7b99-dmcpx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-58dbcc7b99-wr6z2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 tiller-deploy-7b677967b9-jqj87               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fgvc7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-knpbf               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
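The percentages `kubectl describe` prints are each request or limit divided by the node's allocatable amount, truncated to a whole percent. A quick consistency check against the figures above (integer-division rounding assumed to match kubectl's):

```python
def percent(quantity, allocatable):
    """Integer percentage of allocatable, as kubectl describe prints it."""
    return quantity * 100 // allocatable

CPU_ALLOCATABLE_M = 2 * 1000     # 2 CPUs, in millicores
MEM_ALLOCATABLE_KI = 3912780     # from the Allocatable block above

print(percent(850, CPU_ALLOCATABLE_M))          # cpu requests: 42
print(percent(388 * 1024, MEM_ALLOCATABLE_KI))  # memory requests: 10
print(percent(426 * 1024, MEM_ALLOCATABLE_KI))  # memory limits: 11
```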
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m4s   kube-proxy       
	  Normal  Starting                 2m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m19s  kubelet          Node addons-012036 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m18s  kubelet          Node addons-012036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s  kubelet          Node addons-012036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s  kubelet          Node addons-012036 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m7s   node-controller  Node addons-012036 event: Registered Node addons-012036 in Controller
	
	
	==> dmesg <==
	[  +0.653985] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +5.040001] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[  +0.059667] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.075584] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.684265] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[Apr16 16:21] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.156476] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.110160] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.176625] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.021541] kauditd_printk_skb: 136 callbacks suppressed
	[  +7.276636] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.291752] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.080528] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.788211] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.199215] kauditd_printk_skb: 31 callbacks suppressed
	[Apr16 16:22] kauditd_printk_skb: 67 callbacks suppressed
	[ +11.731376] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.500462] kauditd_printk_skb: 10 callbacks suppressed
	[ +10.786935] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.117266] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.009410] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.018005] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.292857] kauditd_printk_skb: 56 callbacks suppressed
	[Apr16 16:23] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.003257] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [085bd521d80e689ee6adf7cb8b640371281a985e7349716003c1f7dc08415dac] <==
	{"level":"info","ts":"2024-04-16T16:21:55.806515Z","caller":"traceutil/trace.go:171","msg":"trace[86216786] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:986; }","duration":"135.544279ms","start":"2024-04-16T16:21:55.670961Z","end":"2024-04-16T16:21:55.806505Z","steps":["trace[86216786] 'agreement among raft nodes before linearized reading'  (duration: 135.163376ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:21:55.80686Z","caller":"traceutil/trace.go:171","msg":"trace[1785917352] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"221.351087ms","start":"2024-04-16T16:21:55.585501Z","end":"2024-04-16T16:21:55.806852Z","steps":["trace[1785917352] 'process raft request'  (duration: 220.371284ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:21:57.074203Z","caller":"traceutil/trace.go:171","msg":"trace[684185531] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"297.001124ms","start":"2024-04-16T16:21:56.777185Z","end":"2024-04-16T16:21:57.074186Z","steps":["trace[684185531] 'process raft request'  (duration: 296.704957ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:21:57.075215Z","caller":"traceutil/trace.go:171","msg":"trace[1690324089] linearizableReadLoop","detail":"{readStateIndex:1028; appliedIndex:1028; }","duration":"208.218349ms","start":"2024-04-16T16:21:56.866799Z","end":"2024-04-16T16:21:57.075018Z","steps":["trace[1690324089] 'read index received'  (duration: 208.211566ms)","trace[1690324089] 'applied index is now lower than readState.Index'  (duration: 5.768µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T16:21:57.076818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.486231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-04-16T16:21:57.077352Z","caller":"traceutil/trace.go:171","msg":"trace[348949161] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1000; }","duration":"167.048907ms","start":"2024-04-16T16:21:56.91026Z","end":"2024-04-16T16:21:57.077309Z","steps":["trace[348949161] 'agreement among raft nodes before linearized reading'  (duration: 165.485172ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:21:57.078561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.701517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-16T16:21:57.078917Z","caller":"traceutil/trace.go:171","msg":"trace[939727194] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:1000; }","duration":"212.120941ms","start":"2024-04-16T16:21:56.866786Z","end":"2024-04-16T16:21:57.078907Z","steps":["trace[939727194] 'agreement among raft nodes before linearized reading'  (duration: 211.681702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:01.644395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.645477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/gcp-auth-webhook-cfg\" ","response":"range_response_count:1 size:2695"}
	{"level":"info","ts":"2024-04-16T16:22:01.644495Z","caller":"traceutil/trace.go:171","msg":"trace[375487526] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/gcp-auth-webhook-cfg; range_end:; response_count:1; response_revision:1042; }","duration":"205.784243ms","start":"2024-04-16T16:22:01.438691Z","end":"2024-04-16T16:22:01.644475Z","steps":["trace[375487526] 'range keys from in-memory index tree'  (duration: 205.320252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:01.645043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.430092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85732"}
	{"level":"info","ts":"2024-04-16T16:22:01.645122Z","caller":"traceutil/trace.go:171","msg":"trace[345045695] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1042; }","duration":"128.536208ms","start":"2024-04-16T16:22:01.516574Z","end":"2024-04-16T16:22:01.645111Z","steps":["trace[345045695] 'range keys from in-memory index tree'  (duration: 128.185653ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:18.205899Z","caller":"traceutil/trace.go:171","msg":"trace[1331830235] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1155; }","duration":"294.845542ms","start":"2024-04-16T16:22:17.911021Z","end":"2024-04-16T16:22:18.205867Z","steps":["trace[1331830235] 'read index received'  (duration: 294.508188ms)","trace[1331830235] 'applied index is now lower than readState.Index'  (duration: 336.55µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T16:22:18.206056Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.45033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.247\" ","response":"range_response_count:1 size:135"}
	{"level":"warn","ts":"2024-04-16T16:22:18.206058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.027616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14397"}
	{"level":"info","ts":"2024-04-16T16:22:18.206096Z","caller":"traceutil/trace.go:171","msg":"trace[1365484486] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1123; }","duration":"295.091169ms","start":"2024-04-16T16:22:17.910995Z","end":"2024-04-16T16:22:18.206087Z","steps":["trace[1365484486] 'agreement among raft nodes before linearized reading'  (duration: 294.975107ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:18.206132Z","caller":"traceutil/trace.go:171","msg":"trace[431793900] range","detail":"{range_begin:/registry/masterleases/192.168.39.247; range_end:; response_count:1; response_revision:1123; }","duration":"282.488416ms","start":"2024-04-16T16:22:17.923581Z","end":"2024-04-16T16:22:18.20607Z","steps":["trace[431793900] 'agreement among raft nodes before linearized reading'  (duration: 282.395538ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:18.206232Z","caller":"traceutil/trace.go:171","msg":"trace[1813769361] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"390.254571ms","start":"2024-04-16T16:22:17.815969Z","end":"2024-04-16T16:22:18.206224Z","steps":["trace[1813769361] 'process raft request'  (duration: 389.602067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:18.206303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:22:17.815941Z","time spent":"390.315425ms","remote":"127.0.0.1:56522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":793,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-ps4j5.17c6cf272aec7b6b\" mod_revision:954 > success:<request_put:<key:\"/registry/events/gadget/gadget-ps4j5.17c6cf272aec7b6b\" value_size:722 lease:5145801175088306029 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-ps4j5.17c6cf272aec7b6b\" > >"}
	{"level":"warn","ts":"2024-04-16T16:22:18.206338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.958838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-16T16:22:18.206358Z","caller":"traceutil/trace.go:171","msg":"trace[43758733] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1123; }","duration":"148.004694ms","start":"2024-04-16T16:22:18.058348Z","end":"2024-04-16T16:22:18.206353Z","steps":["trace[43758733] 'agreement among raft nodes before linearized reading'  (duration: 147.934703ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:23.965517Z","caller":"traceutil/trace.go:171","msg":"trace[1324930744] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"104.133832ms","start":"2024-04-16T16:22:23.861368Z","end":"2024-04-16T16:22:23.965502Z","steps":["trace[1324930744] 'process raft request'  (duration: 103.393479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:38.257919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.363576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/gadget/kube-root-ca.crt\" ","response":"range_response_count:1 size:1740"}
	{"level":"info","ts":"2024-04-16T16:22:38.257965Z","caller":"traceutil/trace.go:171","msg":"trace[1838935135] range","detail":"{range_begin:/registry/configmaps/gadget/kube-root-ca.crt; range_end:; response_count:1; response_revision:1282; }","duration":"257.442522ms","start":"2024-04-16T16:22:38.000512Z","end":"2024-04-16T16:22:38.257955Z","steps":["trace[1838935135] 'range keys from in-memory index tree'  (duration: 257.248177ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:38.260001Z","caller":"traceutil/trace.go:171","msg":"trace[68077669] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"101.801907ms","start":"2024-04-16T16:22:38.156701Z","end":"2024-04-16T16:22:38.258503Z","steps":["trace[68077669] 'process raft request'  (duration: 101.695054ms)"],"step_count":1}
	
	
	==> gcp-auth [64447d010527c0dc5f9323ff44c5d2b5e3dfa7a6f0799c0bc2e216129458b8e5] <==
	2024/04/16 16:22:25 GCP Auth Webhook started!
	2024/04/16 16:22:33 Ready to marshal response ...
	2024/04/16 16:22:33 Ready to write response ...
	2024/04/16 16:22:37 Ready to marshal response ...
	2024/04/16 16:22:37 Ready to write response ...
	2024/04/16 16:22:38 Ready to marshal response ...
	2024/04/16 16:22:38 Ready to write response ...
	2024/04/16 16:22:38 Ready to marshal response ...
	2024/04/16 16:22:38 Ready to write response ...
	2024/04/16 16:22:46 Ready to marshal response ...
	2024/04/16 16:22:46 Ready to write response ...
	2024/04/16 16:22:46 Ready to marshal response ...
	2024/04/16 16:22:46 Ready to write response ...
	2024/04/16 16:22:46 Ready to marshal response ...
	2024/04/16 16:22:46 Ready to write response ...
	2024/04/16 16:22:50 Ready to marshal response ...
	2024/04/16 16:22:50 Ready to write response ...
	2024/04/16 16:22:50 Ready to marshal response ...
	2024/04/16 16:22:50 Ready to write response ...
	2024/04/16 16:22:51 Ready to marshal response ...
	2024/04/16 16:22:51 Ready to write response ...
	2024/04/16 16:23:03 Ready to marshal response ...
	2024/04/16 16:23:03 Ready to write response ...
	2024/04/16 16:23:07 Ready to marshal response ...
	2024/04/16 16:23:07 Ready to write response ...
	
	
	==> kernel <==
	 16:23:08 up 3 min,  0 users,  load average: 3.10, 1.80, 0.72
	Linux addons-012036 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [48a1e53b66a23e7a0573e41068f9c5090d8c75c664d2ab30d4d01cf1368f5624] <==
	I0416 16:21:12.127110       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.106.97.149"}
	I0416 16:21:12.214586       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.55.36"}
	I0416 16:21:12.326920       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0416 16:21:14.390852       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.51.58"}
	I0416 16:21:14.413738       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0416 16:21:14.790329       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.34.184"}
	I0416 16:21:16.753839       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.13.69"}
	E0416 16:21:39.522595       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	W0416 16:21:39.526682       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 16:21:39.527044       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0416 16:21:39.533150       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	E0416 16:21:39.537215       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	E0416 16:21:39.548663       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	I0416 16:21:39.644154       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0416 16:22:32.825488       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0416 16:22:33.930159       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0416 16:22:40.537424       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0416 16:22:46.032156       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.47.222"}
	I0416 16:22:46.611543       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0416 16:22:50.600220       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0416 16:22:50.869192       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.177.157"}
	I0416 16:23:03.597194       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.196.211"}
	E0416 16:23:05.485289       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	E0416 16:23:06.550837       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [87ef232e07b969d1694735212110e97ade6960347449a86c2ad23f48f519c049] <==
	I0416 16:22:46.335842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="26.055116ms"
	I0416 16:22:46.336159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="186.7µs"
	I0416 16:22:49.546591       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0416 16:22:49.546945       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0416 16:22:50.073396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5446596998" duration="8.764µs"
	I0416 16:22:50.651236       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0416 16:22:51.457965       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:22:51.458043       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0416 16:22:51.479994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.384µs"
	I0416 16:22:55.365527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="102.951µs"
	I0416 16:22:55.418298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="20.750065ms"
	I0416 16:22:55.418743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="138.746µs"
	I0416 16:23:01.487136       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0416 16:23:01.487218       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:23:01.886821       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0416 16:23:01.887251       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:23:03.350396       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0416 16:23:03.393050       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9rvj4"
	I0416 16:23:03.416431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.998545ms"
	I0416 16:23:03.449953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.677936ms"
	I0416 16:23:03.454488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.413µs"
	I0416 16:23:05.200214       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0416 16:23:05.397596       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0416 16:23:07.299220       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:23:07.299285       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [b656b7633700bf469cfbf1a15cde28b6e1a8cd5e1f762666e40a4eda00022a63] <==
	I0416 16:21:03.527818       1 server_others.go:72] "Using iptables proxy"
	I0416 16:21:03.545534       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	I0416 16:21:03.826107       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:21:03.826154       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:21:03.826167       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:21:04.000980       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:21:04.001190       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:21:04.001202       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:21:04.011675       1 config.go:188] "Starting service config controller"
	I0416 16:21:04.011699       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:21:04.011721       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:21:04.011724       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:21:04.012203       1 config.go:315] "Starting node config controller"
	I0416 16:21:04.012210       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:21:04.113447       1 shared_informer.go:318] Caches are synced for node config
	I0416 16:21:04.113497       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:21:04.113575       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [24af4e069b22ff8e362e59eeacad22818e447bc78b5e86e5ede0b4994edf7fc7] <==
	W0416 16:20:46.142313       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:20:46.142320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:20:46.142506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 16:20:46.142549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 16:20:46.142600       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:20:46.142669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:20:46.975914       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:20:46.975973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:20:46.998830       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:20:46.998860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:20:47.073308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:47.073380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:47.162376       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:47.162453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:47.244816       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:20:47.244852       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:20:47.357677       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:20:47.357979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:20:47.474088       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:20:47.474152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:20:47.481971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:47.482032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:47.489004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:20:47.489072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0416 16:20:50.207893       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 16:23:07 addons-012036 kubelet[1246]: E0416 16:23:07.306749    1246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="node-driver-registrar"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: E0416 16:23:07.306757    1246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="csi-snapshotter"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: E0416 16:23:07.306765    1246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="csi-external-health-monitor-controller"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: E0416 16:23:07.306774    1246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="liveness-probe"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306806    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="csi-external-health-monitor-controller"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306814    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="0445f263-dae8-46f5-a610-7bf97d2e8310" containerName="minikube-ingress-dns"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306820    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a4dcb7-fc8d-45d7-912a-052b70ffedea" containerName="csi-attacher"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306825    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="liveness-probe"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306833    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed11f0c4-aade-4f74-ae20-250260b20010" containerName="csi-resizer"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306840    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="hostpath"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306848    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="node-driver-registrar"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306855    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="csi-provisioner"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.306861    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" containerName="csi-snapshotter"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.466405    1246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzlv\" (UniqueName: \"kubernetes.io/projected/4ad3331b-c57e-449b-a159-9d2f3a9ecabf-kube-api-access-wkzlv\") pod \"helm-test\" (UID: \"4ad3331b-c57e-449b-a159-9d2f3a9ecabf\") " pod="kube-system/helm-test"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.538957    1246 scope.go:117] "RemoveContainer" containerID="b599890fb9c034735fb9f5964f815268eb07ef30ac8729517f00fa72d6109696"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.577951    1246 scope.go:117] "RemoveContainer" containerID="6dcb42bc8b7b8829634f03ba603a3768ca32d9af9abd13a9147ddb3658c72b8f"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.612385    1246 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.639945    1246 scope.go:117] "RemoveContainer" containerID="2a1e553761953c4abc10789770687355ba5d2a4b6d770e53e35ebf1b3aa0bb96"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.659525    1246 scope.go:117] "RemoveContainer" containerID="b6246028475c81eacba55f063c21da9c4c960dd83314f5fd9af2137d2835d32c"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.727837    1246 scope.go:117] "RemoveContainer" containerID="9ed676fde39246300ab97468d5587ac60c654caf1552c5e83d60f7b7cfe1aef7"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.783951    1246 scope.go:117] "RemoveContainer" containerID="704964b5972d3df0f8969e1a7e6b99625e92d3a7f3204a05b89853be082a5271"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.808912    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a4dcb7-fc8d-45d7-912a-052b70ffedea" path="/var/lib/kubelet/pods/60a4dcb7-fc8d-45d7-912a-052b70ffedea/volumes"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.809527    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" path="/var/lib/kubelet/pods/6942c4bf-39db-43ca-bf0e-52f91546c9da/volumes"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.810499    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed11f0c4-aade-4f74-ae20-250260b20010" path="/var/lib/kubelet/pods/ed11f0c4-aade-4f74-ae20-250260b20010/volumes"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.837801    1246 scope.go:117] "RemoveContainer" containerID="93b2144a44fd3d16f144ceb35f7e69404f5c020aed2b91f2a1934de6fefc1859"
	
	
	==> storage-provisioner [d754b9971ad2d2f5a7e70ad479abc97438d830807c7537054de9f14cdb834409] <==
	I0416 16:21:14.815389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:21:15.055503       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:21:15.055542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:21:15.326900       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:21:15.370700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c9135d2-e05f-4353-8901-9f73315b8088", APIVersion:"v1", ResourceVersion:"786", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-012036_1b1199e4-ea3b-4fe6-b1e1-976f51e3b165 became leader
	I0416 16:21:15.370988       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-012036_1b1199e4-ea3b-4fe6-b1e1-976f51e3b165!
	I0416 16:21:15.572726       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-012036_1b1199e4-ea3b-4fe6-b1e1-976f51e3b165!
	E0416 16:22:50.489301       1 controller.go:1050] claim "8f41ec9b-ffc7-4a6a-90f0-74da7d87242a" in work queue no longer exists
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012036 -n addons-012036
helpers_test.go:261: (dbg) Run:  kubectl --context addons-012036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv helm-test
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-012036 describe pod ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv helm-test
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-012036 describe pod ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv helm-test: exit status 1 (78.951623ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zpdtd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kscqv" not found
	Error from server (NotFound): pods "helm-test" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-012036 describe pod ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv helm-test: exit status 1
--- FAIL: TestAddons/parallel/Ingress (19.05s)

TestAddons/parallel/CSI (48.34s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.076316ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-012036 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-012036 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c8f36c62-d615-4217-a21f-e7e958ecd5a1] Pending
helpers_test.go:344: "task-pv-pod" [c8f36c62-d615-4217-a21f-e7e958ecd5a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c8f36c62-d615-4217-a21f-e7e958ecd5a1] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.006330116s
addons_test.go:584: (dbg) Run:  kubectl --context addons-012036 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-012036 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-012036 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-012036 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-012036 delete pod task-pv-pod: (1.507850199s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-012036 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-012036 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-012036 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [22f9ee11-1273-422d-bad0-23bd90adbf66] Pending
helpers_test.go:344: "task-pv-pod-restore" [22f9ee11-1273-422d-bad0-23bd90adbf66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [22f9ee11-1273-422d-bad0-23bd90adbf66] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.006735317s
addons_test.go:626: (dbg) Run:  kubectl --context addons-012036 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-012036 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-012036 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-012036 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.425960893s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-012036 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (295.756853ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0416 16:23:11.444579   14225 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:23:11.444729   14225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:23:11.444741   14225 out.go:304] Setting ErrFile to fd 2...
	I0416 16:23:11.444746   14225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:23:11.444935   14225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:23:11.445197   14225 mustload.go:65] Loading cluster: addons-012036
	I0416 16:23:11.445544   14225 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:23:11.445564   14225 addons.go:597] checking whether the cluster is paused
	I0416 16:23:11.445646   14225 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:23:11.445657   14225 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:23:11.446018   14225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:23:11.446078   14225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:23:11.461968   14225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0416 16:23:11.462458   14225 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:23:11.463111   14225 main.go:141] libmachine: Using API Version  1
	I0416 16:23:11.463166   14225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:23:11.463482   14225 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:23:11.463684   14225 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:23:11.465308   14225 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:23:11.465529   14225 ssh_runner.go:195] Run: systemctl --version
	I0416 16:23:11.465552   14225 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:23:11.467835   14225 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:23:11.468229   14225 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:23:11.468282   14225 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:23:11.468463   14225 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:23:11.468671   14225 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:23:11.468838   14225 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:23:11.469038   14225 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:23:11.551204   14225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0416 16:23:11.551271   14225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:23:11.599827   14225 cri.go:89] found id: "4ce8234c78e56e739dd4a1c4dd38418eb2a57ffa8ecd9c21e0a9e8766c979468"
	I0416 16:23:11.599858   14225 cri.go:89] found id: "86f1572e10b06d09eea995808a7412c1995eac8c1fc68f274f54c170123178eb"
	I0416 16:23:11.599864   14225 cri.go:89] found id: "3c4ada40b02b1fe4a5c02b82266fcc11273ba93d63b18e6ff891e6880fb25a33"
	I0416 16:23:11.599869   14225 cri.go:89] found id: "c25c9e32964c81cc36e6803199651d670631bedf057463aa4942a120f235791c"
	I0416 16:23:11.599872   14225 cri.go:89] found id: "0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593"
	I0416 16:23:11.599882   14225 cri.go:89] found id: "d754b9971ad2d2f5a7e70ad479abc97438d830807c7537054de9f14cdb834409"
	I0416 16:23:11.599886   14225 cri.go:89] found id: "f7179288f854b31cc4cbdd569bfcd28c058e519f2bf3526e9928a17684729742"
	I0416 16:23:11.599889   14225 cri.go:89] found id: "b656b7633700bf469cfbf1a15cde28b6e1a8cd5e1f762666e40a4eda00022a63"
	I0416 16:23:11.599893   14225 cri.go:89] found id: "24af4e069b22ff8e362e59eeacad22818e447bc78b5e86e5ede0b4994edf7fc7"
	I0416 16:23:11.599906   14225 cri.go:89] found id: "48a1e53b66a23e7a0573e41068f9c5090d8c75c664d2ab30d4d01cf1368f5624"
	I0416 16:23:11.599913   14225 cri.go:89] found id: "085bd521d80e689ee6adf7cb8b640371281a985e7349716003c1f7dc08415dac"
	I0416 16:23:11.599917   14225 cri.go:89] found id: "87ef232e07b969d1694735212110e97ade6960347449a86c2ad23f48f519c049"
	I0416 16:23:11.599921   14225 cri.go:89] found id: ""
	I0416 16:23:11.599993   14225 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0416 16:23:11.678128   14225 main.go:141] libmachine: Making call to close driver server
	I0416 16:23:11.678152   14225 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:23:11.678520   14225 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:23:11.678541   14225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:23:11.680692   14225 out.go:177] 
	W0416 16:23:11.682141   14225 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-16T16:23:11Z" level=error msg="stat /run/containerd/runc/k8s.io/a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-16T16:23:11Z" level=error msg="stat /run/containerd/runc/k8s.io/a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1: no such file or directory"
	
	W0416 16:23:11.682157   14225 out.go:239] * 
	* 
	W0416 16:23:11.684305   14225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 16:23:11.685846   14225 out.go:177] 

** /stderr **
addons_test.go:644: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-012036 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-012036 -n addons-012036
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-012036 logs -n 25: (1.930178636s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-220331                                                                     | download-only-220331 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-253269                                                                     | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-310063                                                                     | download-only-310063 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-220331                                                                     | download-only-220331 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-437913 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | binary-mirror-437913                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:33293                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-437913                                                                     | binary-mirror-437913 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| addons  | enable dashboard -p                                                                         | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-012036 --wait=true                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| addons  | addons-012036 addons                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-012036 ip                                                                            | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | -p addons-012036                                                                            |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | -p addons-012036                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | addons-012036                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-012036 ssh cat                                                                       | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | /opt/local-path-provisioner/pvc-8f41ec9b-ffc7-4a6a-90f0-74da7d87242a_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ssh     | addons-012036 ssh curl -s                                                                   | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ip      | addons-012036 ip                                                                            | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-012036 addons                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC | 16 Apr 24 16:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC |                     |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	| addons  | addons-012036 addons                                                                        | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC |                     |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-012036 addons disable                                                                | addons-012036        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:23 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:59
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:59.527245   11739 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:59.527526   11739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:59.527537   11739 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:59.527542   11739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:59.527741   11739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:19:59.528422   11739 out.go:298] Setting JSON to false
	I0416 16:19:59.529230   11739 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":150,"bootTime":1713284250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:59.529300   11739 start.go:139] virtualization: kvm guest
	I0416 16:19:59.531725   11739 out.go:177] * [addons-012036] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:59.533232   11739 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:19:59.533278   11739 notify.go:220] Checking for updates...
	I0416 16:19:59.536051   11739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:59.537531   11739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:19:59.538814   11739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:59.540095   11739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:19:59.541412   11739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:19:59.542804   11739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:19:59.577853   11739 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 16:19:59.579362   11739 start.go:297] selected driver: kvm2
	I0416 16:19:59.579378   11739 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:19:59.579394   11739 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:19:59.580090   11739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:59.580188   11739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3613/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:19:59.596402   11739 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:19:59.596482   11739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:19:59.596725   11739 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:19:59.596790   11739 cni.go:84] Creating CNI manager for ""
	I0416 16:19:59.596808   11739 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:19:59.596828   11739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:19:59.596894   11739 start.go:340] cluster config:
	{Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:19:59.597012   11739 iso.go:125] acquiring lock: {Name:mk70afca65b055481b04a6db2c93574dfae6043a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:59.598884   11739 out.go:177] * Starting "addons-012036" primary control-plane node in "addons-012036" cluster
	I0416 16:19:59.600486   11739 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0416 16:19:59.600531   11739 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0416 16:19:59.600544   11739 cache.go:56] Caching tarball of preloaded images
	I0416 16:19:59.600631   11739 preload.go:173] Found /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:19:59.600643   11739 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0416 16:19:59.600958   11739 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/config.json ...
	I0416 16:19:59.600989   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/config.json: {Name:mk66815558bebc3bd2f023ca5dabf70847044b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:19:59.601152   11739 start.go:360] acquireMachinesLock for addons-012036: {Name:mk2d52a4d04829b055d900e30b1db98f01926bd9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:19:59.601221   11739 start.go:364] duration metric: took 50.948µs to acquireMachinesLock for "addons-012036"
	I0416 16:19:59.601252   11739 start.go:93] Provisioning new machine with config: &{Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0416 16:19:59.601336   11739 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 16:19:59.603292   11739 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0416 16:19:59.603439   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:19:59.603485   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:19:59.618271   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0416 16:19:59.618682   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:19:59.619245   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:19:59.619272   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:19:59.619700   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:19:59.619899   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:19:59.620048   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:19:59.620195   11739 start.go:159] libmachine.API.Create for "addons-012036" (driver="kvm2")
	I0416 16:19:59.620227   11739 client.go:168] LocalClient.Create starting
	I0416 16:19:59.620282   11739 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem
	I0416 16:19:59.746334   11739 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem
	I0416 16:19:59.880939   11739 main.go:141] libmachine: Running pre-create checks...
	I0416 16:19:59.880967   11739 main.go:141] libmachine: (addons-012036) Calling .PreCreateCheck
	I0416 16:19:59.881527   11739 main.go:141] libmachine: (addons-012036) Calling .GetConfigRaw
	I0416 16:19:59.882013   11739 main.go:141] libmachine: Creating machine...
	I0416 16:19:59.882030   11739 main.go:141] libmachine: (addons-012036) Calling .Create
	I0416 16:19:59.882209   11739 main.go:141] libmachine: (addons-012036) Creating KVM machine...
	I0416 16:19:59.883506   11739 main.go:141] libmachine: (addons-012036) DBG | found existing default KVM network
	I0416 16:19:59.884383   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:19:59.884210   11761 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0416 16:19:59.884429   11739 main.go:141] libmachine: (addons-012036) DBG | created network xml: 
	I0416 16:19:59.884460   11739 main.go:141] libmachine: (addons-012036) DBG | <network>
	I0416 16:19:59.884475   11739 main.go:141] libmachine: (addons-012036) DBG |   <name>mk-addons-012036</name>
	I0416 16:19:59.884486   11739 main.go:141] libmachine: (addons-012036) DBG |   <dns enable='no'/>
	I0416 16:19:59.884492   11739 main.go:141] libmachine: (addons-012036) DBG |   
	I0416 16:19:59.884499   11739 main.go:141] libmachine: (addons-012036) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0416 16:19:59.884507   11739 main.go:141] libmachine: (addons-012036) DBG |     <dhcp>
	I0416 16:19:59.884516   11739 main.go:141] libmachine: (addons-012036) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0416 16:19:59.884528   11739 main.go:141] libmachine: (addons-012036) DBG |     </dhcp>
	I0416 16:19:59.884539   11739 main.go:141] libmachine: (addons-012036) DBG |   </ip>
	I0416 16:19:59.884552   11739 main.go:141] libmachine: (addons-012036) DBG |   
	I0416 16:19:59.884561   11739 main.go:141] libmachine: (addons-012036) DBG | </network>
	I0416 16:19:59.884568   11739 main.go:141] libmachine: (addons-012036) DBG | 
	I0416 16:19:59.890144   11739 main.go:141] libmachine: (addons-012036) DBG | trying to create private KVM network mk-addons-012036 192.168.39.0/24...
	I0416 16:19:59.963308   11739 main.go:141] libmachine: (addons-012036) DBG | private KVM network mk-addons-012036 192.168.39.0/24 created
	I0416 16:19:59.963427   11739 main.go:141] libmachine: (addons-012036) Setting up store path in /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036 ...
	I0416 16:19:59.963450   11739 main.go:141] libmachine: (addons-012036) Building disk image from file:///home/jenkins/minikube-integration/18649-3613/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:19:59.963472   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:19:59.963394   11761 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:59.963640   11739 main.go:141] libmachine: (addons-012036) Downloading /home/jenkins/minikube-integration/18649-3613/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3613/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:20:00.200845   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:00.200702   11761 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa...
	I0416 16:20:00.389455   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:00.389312   11761 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/addons-012036.rawdisk...
	I0416 16:20:00.389478   11739 main.go:141] libmachine: (addons-012036) DBG | Writing magic tar header
	I0416 16:20:00.389488   11739 main.go:141] libmachine: (addons-012036) DBG | Writing SSH key tar header
	I0416 16:20:00.389498   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:00.389436   11761 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036 ...
	I0416 16:20:00.389561   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036
	I0416 16:20:00.389584   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613/.minikube/machines
	I0416 16:20:00.389594   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:20:00.389641   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036 (perms=drwx------)
	I0416 16:20:00.389667   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:20:00.389683   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3613
	I0416 16:20:00.389725   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613/.minikube (perms=drwxr-xr-x)
	I0416 16:20:00.389744   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:20:00.389755   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:20:00.389764   11739 main.go:141] libmachine: (addons-012036) DBG | Checking permissions on dir: /home
	I0416 16:20:00.389775   11739 main.go:141] libmachine: (addons-012036) DBG | Skipping /home - not owner
	I0416 16:20:00.389817   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration/18649-3613 (perms=drwxrwxr-x)
	I0416 16:20:00.389839   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:20:00.389847   11739 main.go:141] libmachine: (addons-012036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:20:00.389855   11739 main.go:141] libmachine: (addons-012036) Creating domain...
	I0416 16:20:00.390879   11739 main.go:141] libmachine: (addons-012036) define libvirt domain using xml: 
	I0416 16:20:00.390915   11739 main.go:141] libmachine: (addons-012036) <domain type='kvm'>
	I0416 16:20:00.390926   11739 main.go:141] libmachine: (addons-012036)   <name>addons-012036</name>
	I0416 16:20:00.390935   11739 main.go:141] libmachine: (addons-012036)   <memory unit='MiB'>4000</memory>
	I0416 16:20:00.390949   11739 main.go:141] libmachine: (addons-012036)   <vcpu>2</vcpu>
	I0416 16:20:00.390959   11739 main.go:141] libmachine: (addons-012036)   <features>
	I0416 16:20:00.390968   11739 main.go:141] libmachine: (addons-012036)     <acpi/>
	I0416 16:20:00.390983   11739 main.go:141] libmachine: (addons-012036)     <apic/>
	I0416 16:20:00.390995   11739 main.go:141] libmachine: (addons-012036)     <pae/>
	I0416 16:20:00.391002   11739 main.go:141] libmachine: (addons-012036)     
	I0416 16:20:00.391012   11739 main.go:141] libmachine: (addons-012036)   </features>
	I0416 16:20:00.391017   11739 main.go:141] libmachine: (addons-012036)   <cpu mode='host-passthrough'>
	I0416 16:20:00.391022   11739 main.go:141] libmachine: (addons-012036)   
	I0416 16:20:00.391032   11739 main.go:141] libmachine: (addons-012036)   </cpu>
	I0416 16:20:00.391040   11739 main.go:141] libmachine: (addons-012036)   <os>
	I0416 16:20:00.391045   11739 main.go:141] libmachine: (addons-012036)     <type>hvm</type>
	I0416 16:20:00.391053   11739 main.go:141] libmachine: (addons-012036)     <boot dev='cdrom'/>
	I0416 16:20:00.391058   11739 main.go:141] libmachine: (addons-012036)     <boot dev='hd'/>
	I0416 16:20:00.391067   11739 main.go:141] libmachine: (addons-012036)     <bootmenu enable='no'/>
	I0416 16:20:00.391072   11739 main.go:141] libmachine: (addons-012036)   </os>
	I0416 16:20:00.391080   11739 main.go:141] libmachine: (addons-012036)   <devices>
	I0416 16:20:00.391090   11739 main.go:141] libmachine: (addons-012036)     <disk type='file' device='cdrom'>
	I0416 16:20:00.391100   11739 main.go:141] libmachine: (addons-012036)       <source file='/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/boot2docker.iso'/>
	I0416 16:20:00.391112   11739 main.go:141] libmachine: (addons-012036)       <target dev='hdc' bus='scsi'/>
	I0416 16:20:00.391121   11739 main.go:141] libmachine: (addons-012036)       <readonly/>
	I0416 16:20:00.391142   11739 main.go:141] libmachine: (addons-012036)     </disk>
	I0416 16:20:00.391155   11739 main.go:141] libmachine: (addons-012036)     <disk type='file' device='disk'>
	I0416 16:20:00.391164   11739 main.go:141] libmachine: (addons-012036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:20:00.391180   11739 main.go:141] libmachine: (addons-012036)       <source file='/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/addons-012036.rawdisk'/>
	I0416 16:20:00.391192   11739 main.go:141] libmachine: (addons-012036)       <target dev='hda' bus='virtio'/>
	I0416 16:20:00.391204   11739 main.go:141] libmachine: (addons-012036)     </disk>
	I0416 16:20:00.391214   11739 main.go:141] libmachine: (addons-012036)     <interface type='network'>
	I0416 16:20:00.391226   11739 main.go:141] libmachine: (addons-012036)       <source network='mk-addons-012036'/>
	I0416 16:20:00.391234   11739 main.go:141] libmachine: (addons-012036)       <model type='virtio'/>
	I0416 16:20:00.391243   11739 main.go:141] libmachine: (addons-012036)     </interface>
	I0416 16:20:00.391248   11739 main.go:141] libmachine: (addons-012036)     <interface type='network'>
	I0416 16:20:00.391255   11739 main.go:141] libmachine: (addons-012036)       <source network='default'/>
	I0416 16:20:00.391260   11739 main.go:141] libmachine: (addons-012036)       <model type='virtio'/>
	I0416 16:20:00.391268   11739 main.go:141] libmachine: (addons-012036)     </interface>
	I0416 16:20:00.391272   11739 main.go:141] libmachine: (addons-012036)     <serial type='pty'>
	I0416 16:20:00.391280   11739 main.go:141] libmachine: (addons-012036)       <target port='0'/>
	I0416 16:20:00.391287   11739 main.go:141] libmachine: (addons-012036)     </serial>
	I0416 16:20:00.391293   11739 main.go:141] libmachine: (addons-012036)     <console type='pty'>
	I0416 16:20:00.391300   11739 main.go:141] libmachine: (addons-012036)       <target type='serial' port='0'/>
	I0416 16:20:00.391332   11739 main.go:141] libmachine: (addons-012036)     </console>
	I0416 16:20:00.391358   11739 main.go:141] libmachine: (addons-012036)     <rng model='virtio'>
	I0416 16:20:00.391374   11739 main.go:141] libmachine: (addons-012036)       <backend model='random'>/dev/random</backend>
	I0416 16:20:00.391385   11739 main.go:141] libmachine: (addons-012036)     </rng>
	I0416 16:20:00.391396   11739 main.go:141] libmachine: (addons-012036)     
	I0416 16:20:00.391406   11739 main.go:141] libmachine: (addons-012036)     
	I0416 16:20:00.391417   11739 main.go:141] libmachine: (addons-012036)   </devices>
	I0416 16:20:00.391429   11739 main.go:141] libmachine: (addons-012036) </domain>
	I0416 16:20:00.391450   11739 main.go:141] libmachine: (addons-012036) 
	I0416 16:20:00.397797   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:81:9e:a9 in network default
	I0416 16:20:00.398420   11739 main.go:141] libmachine: (addons-012036) Ensuring networks are active...
	I0416 16:20:00.398445   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:00.399114   11739 main.go:141] libmachine: (addons-012036) Ensuring network default is active
	I0416 16:20:00.399434   11739 main.go:141] libmachine: (addons-012036) Ensuring network mk-addons-012036 is active
	I0416 16:20:00.399959   11739 main.go:141] libmachine: (addons-012036) Getting domain xml...
	I0416 16:20:00.400693   11739 main.go:141] libmachine: (addons-012036) Creating domain...
	I0416 16:20:01.847412   11739 main.go:141] libmachine: (addons-012036) Waiting to get IP...
	I0416 16:20:01.848294   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:01.848677   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:01.848720   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:01.848670   11761 retry.go:31] will retry after 255.945162ms: waiting for machine to come up
	I0416 16:20:02.106284   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:02.106746   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:02.106774   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:02.106706   11761 retry.go:31] will retry after 366.834761ms: waiting for machine to come up
	I0416 16:20:02.475444   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:02.475859   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:02.475880   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:02.475838   11761 retry.go:31] will retry after 386.130051ms: waiting for machine to come up
	I0416 16:20:02.863399   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:02.863861   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:02.863888   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:02.863802   11761 retry.go:31] will retry after 584.84142ms: waiting for machine to come up
	I0416 16:20:03.450767   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:03.451243   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:03.451268   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:03.451202   11761 retry.go:31] will retry after 716.748039ms: waiting for machine to come up
	I0416 16:20:04.169306   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:04.169703   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:04.169748   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:04.169668   11761 retry.go:31] will retry after 844.438849ms: waiting for machine to come up
	I0416 16:20:05.015229   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:05.015609   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:05.015631   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:05.015569   11761 retry.go:31] will retry after 723.980814ms: waiting for machine to come up
	I0416 16:20:05.741666   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:05.741988   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:05.742010   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:05.741960   11761 retry.go:31] will retry after 1.348041583s: waiting for machine to come up
	I0416 16:20:07.092468   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:07.092906   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:07.092923   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:07.092861   11761 retry.go:31] will retry after 1.612633285s: waiting for machine to come up
	I0416 16:20:08.707805   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:08.708256   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:08.708286   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:08.708210   11761 retry.go:31] will retry after 2.090027603s: waiting for machine to come up
	I0416 16:20:10.799583   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:10.800038   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:10.800062   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:10.800000   11761 retry.go:31] will retry after 2.137796384s: waiting for machine to come up
	I0416 16:20:12.938896   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:12.939255   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:12.939290   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:12.939219   11761 retry.go:31] will retry after 3.492845465s: waiting for machine to come up
	I0416 16:20:16.434224   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:16.434793   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:16.434816   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:16.434731   11761 retry.go:31] will retry after 4.261651129s: waiting for machine to come up
	I0416 16:20:20.697906   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:20.698385   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find current IP address of domain addons-012036 in network mk-addons-012036
	I0416 16:20:20.698423   11739 main.go:141] libmachine: (addons-012036) DBG | I0416 16:20:20.698361   11761 retry.go:31] will retry after 3.86830584s: waiting for machine to come up
	I0416 16:20:24.571593   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:24.572110   11739 main.go:141] libmachine: (addons-012036) Found IP for machine: 192.168.39.247
	I0416 16:20:24.572133   11739 main.go:141] libmachine: (addons-012036) Reserving static IP address...
	I0416 16:20:24.572150   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has current primary IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:24.572489   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find host DHCP lease matching {name: "addons-012036", mac: "52:54:00:dd:cf:c9", ip: "192.168.39.247"} in network mk-addons-012036
	I0416 16:20:24.652648   11739 main.go:141] libmachine: (addons-012036) DBG | Getting to WaitForSSH function...
	I0416 16:20:24.652688   11739 main.go:141] libmachine: (addons-012036) Reserved static IP address: 192.168.39.247
	I0416 16:20:24.652700   11739 main.go:141] libmachine: (addons-012036) Waiting for SSH to be available...
	I0416 16:20:24.655288   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:24.655611   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036
	I0416 16:20:24.655633   11739 main.go:141] libmachine: (addons-012036) DBG | unable to find defined IP address of network mk-addons-012036 interface with MAC address 52:54:00:dd:cf:c9
	I0416 16:20:24.655860   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH client type: external
	I0416 16:20:24.655883   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa (-rw-------)
	I0416 16:20:24.655905   11739 main.go:141] libmachine: (addons-012036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:20:24.655917   11739 main.go:141] libmachine: (addons-012036) DBG | About to run SSH command:
	I0416 16:20:24.655927   11739 main.go:141] libmachine: (addons-012036) DBG | exit 0
	I0416 16:20:24.667733   11739 main.go:141] libmachine: (addons-012036) DBG | SSH cmd err, output: exit status 255: 
	I0416 16:20:24.667760   11739 main.go:141] libmachine: (addons-012036) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 16:20:24.667771   11739 main.go:141] libmachine: (addons-012036) DBG | command : exit 0
	I0416 16:20:24.667779   11739 main.go:141] libmachine: (addons-012036) DBG | err     : exit status 255
	I0416 16:20:24.667790   11739 main.go:141] libmachine: (addons-012036) DBG | output  : 
	I0416 16:20:27.668005   11739 main.go:141] libmachine: (addons-012036) DBG | Getting to WaitForSSH function...
	I0416 16:20:27.670351   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.670717   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:27.670755   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.670884   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH client type: external
	I0416 16:20:27.670909   11739 main.go:141] libmachine: (addons-012036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa (-rw-------)
	I0416 16:20:27.670932   11739 main.go:141] libmachine: (addons-012036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:20:27.670949   11739 main.go:141] libmachine: (addons-012036) DBG | About to run SSH command:
	I0416 16:20:27.670964   11739 main.go:141] libmachine: (addons-012036) DBG | exit 0
	I0416 16:20:27.796051   11739 main.go:141] libmachine: (addons-012036) DBG | SSH cmd err, output: <nil>: 
	I0416 16:20:27.796440   11739 main.go:141] libmachine: (addons-012036) KVM machine creation complete!
	I0416 16:20:27.796681   11739 main.go:141] libmachine: (addons-012036) Calling .GetConfigRaw
	I0416 16:20:27.797276   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:27.797482   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:27.797625   11739 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:20:27.797641   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:20:27.798941   11739 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:20:27.798960   11739 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:20:27.798968   11739 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:20:27.798974   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:27.801332   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.801653   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:27.801684   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.801790   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:27.801998   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.802155   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.802298   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:27.802479   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:27.802642   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:27.802653   11739 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:20:27.906954   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:20:27.906975   11739 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:20:27.906983   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:27.909604   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.909994   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:27.910027   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:27.910142   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:27.910350   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.910512   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:27.910621   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:27.910817   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:27.910998   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:27.911012   11739 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:20:28.016880   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:20:28.016975   11739 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:20:28.016990   11739 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:20:28.017003   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:20:28.017269   11739 buildroot.go:166] provisioning hostname "addons-012036"
	I0416 16:20:28.017309   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:20:28.017545   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.020128   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.020472   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.020524   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.020733   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.020909   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.021065   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.021215   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.021381   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:28.021554   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:28.021567   11739 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-012036 && echo "addons-012036" | sudo tee /etc/hostname
	I0416 16:20:28.141999   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012036
	
	I0416 16:20:28.142028   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.144672   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.144992   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.145019   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.145218   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.145439   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.145631   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.145788   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.145968   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:28.146137   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:28.146153   11739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-012036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-012036/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-012036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:20:28.262924   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:20:28.262961   11739 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3613/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3613/.minikube}
	I0416 16:20:28.262989   11739 buildroot.go:174] setting up certificates
	I0416 16:20:28.263000   11739 provision.go:84] configureAuth start
	I0416 16:20:28.263013   11739 main.go:141] libmachine: (addons-012036) Calling .GetMachineName
	I0416 16:20:28.263343   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:28.265966   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.266290   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.266312   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.266482   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.268651   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.269020   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.269040   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.269172   11739 provision.go:143] copyHostCerts
	I0416 16:20:28.269254   11739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3613/.minikube/ca.pem (1078 bytes)
	I0416 16:20:28.269414   11739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3613/.minikube/cert.pem (1123 bytes)
	I0416 16:20:28.269513   11739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3613/.minikube/key.pem (1675 bytes)
	I0416 16:20:28.269598   11739 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3613/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca-key.pem org=jenkins.addons-012036 san=[127.0.0.1 192.168.39.247 addons-012036 localhost minikube]
	I0416 16:20:28.404570   11739 provision.go:177] copyRemoteCerts
	I0416 16:20:28.404627   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:20:28.404653   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.407562   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.407893   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.407924   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.408099   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.408337   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.408478   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.408654   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:28.499351   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:20:28.532783   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:20:28.577530   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:20:28.607574   11739 provision.go:87] duration metric: took 344.562659ms to configureAuth
	I0416 16:20:28.607610   11739 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:20:28.607841   11739 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:20:28.607871   11739 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:20:28.607883   11739 main.go:141] libmachine: (addons-012036) Calling .GetURL
	I0416 16:20:28.609116   11739 main.go:141] libmachine: (addons-012036) DBG | Using libvirt version 6000000
	I0416 16:20:28.611263   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.611676   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.611695   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.611946   11739 main.go:141] libmachine: Docker is up and running!
	I0416 16:20:28.611965   11739 main.go:141] libmachine: Reticulating splines...
	I0416 16:20:28.611974   11739 client.go:171] duration metric: took 28.991735116s to LocalClient.Create
	I0416 16:20:28.611999   11739 start.go:167] duration metric: took 28.991802959s to libmachine.API.Create "addons-012036"
	I0416 16:20:28.612011   11739 start.go:293] postStartSetup for "addons-012036" (driver="kvm2")
	I0416 16:20:28.612025   11739 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:20:28.612062   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.612310   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:20:28.612333   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.614770   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.615233   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.615261   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.615443   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.615671   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.615854   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.615998   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:28.699360   11739 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:20:28.705192   11739 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:20:28.705232   11739 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3613/.minikube/addons for local assets ...
	I0416 16:20:28.705296   11739 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3613/.minikube/files for local assets ...
	I0416 16:20:28.705319   11739 start.go:296] duration metric: took 93.301134ms for postStartSetup
	I0416 16:20:28.705350   11739 main.go:141] libmachine: (addons-012036) Calling .GetConfigRaw
	I0416 16:20:28.743430   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:28.746342   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.746748   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.746805   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.747082   11739 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/config.json ...
	I0416 16:20:28.808163   11739 start.go:128] duration metric: took 29.206809207s to createHost
	I0416 16:20:28.808226   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.811324   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.811724   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.811762   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.812067   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.812305   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.812504   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.812673   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.812847   11739 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:28.813017   11739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0416 16:20:28.813029   11739 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:20:28.921255   11739 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713284428.908939594
	
	I0416 16:20:28.921285   11739 fix.go:216] guest clock: 1713284428.908939594
	I0416 16:20:28.921295   11739 fix.go:229] Guest: 2024-04-16 16:20:28.908939594 +0000 UTC Remote: 2024-04-16 16:20:28.80818957 +0000 UTC m=+29.328031426 (delta=100.750024ms)
	I0416 16:20:28.921333   11739 fix.go:200] guest clock delta is within tolerance: 100.750024ms
	I0416 16:20:28.921342   11739 start.go:83] releasing machines lock for "addons-012036", held for 29.320107375s
	I0416 16:20:28.921377   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.921687   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:28.924400   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.924761   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.924793   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.924934   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.925582   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.925788   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:20:28.925904   11739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:20:28.925945   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.926016   11739 ssh_runner.go:195] Run: cat /version.json
	I0416 16:20:28.926040   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:20:28.928769   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929052   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929086   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.929107   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929300   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.929488   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.929543   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:28.929570   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:28.929694   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.929733   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:20:28.929875   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:20:28.929879   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:28.930032   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:20:28.930184   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:20:29.009394   11739 ssh_runner.go:195] Run: systemctl --version
	I0416 16:20:29.040153   11739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:20:29.047305   11739 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:20:29.047387   11739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:20:29.067100   11739 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:20:29.067133   11739 start.go:494] detecting cgroup driver to use...
	I0416 16:20:29.067241   11739 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:20:29.311439   11739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:20:29.326715   11739 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:20:29.326788   11739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:20:29.342653   11739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:20:29.358843   11739 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:20:29.489765   11739 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:20:29.658463   11739 docker.go:233] disabling docker service ...
	I0416 16:20:29.658529   11739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:20:29.676244   11739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:20:29.692845   11739 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:20:29.820900   11739 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:20:29.967437   11739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:20:29.983812   11739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:20:30.006954   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:20:30.020149   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:20:30.033236   11739 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:20:30.033303   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:20:30.046262   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:20:30.059317   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:20:30.072189   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:20:30.085125   11739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:20:30.099112   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:20:30.112098   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:20:30.124845   11739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
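The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place (sandbox image, cgroup driver, runtime type). A minimal sketch of the two key rewrites, run against a temporary stand-in file rather than the real config (which needs sudo on the guest); the stand-in contents are illustrative:

```shell
#!/usr/bin/env bash
set -eu
# Temp stand-in for /etc/containerd/config.toml with hypothetical old values.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    SystemdCgroup = true
EOF
# Same sed expressions the log shows minikube issuing over SSH:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result="$(cat "$cfg")"
echo "$result"
rm -f "$cfg"
```

The `\1` backreference preserves whatever indentation the TOML file already uses, which is why the expressions capture the leading spaces instead of anchoring on a fixed column.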
	I0416 16:20:30.138222   11739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:20:30.149785   11739 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:20:30.149847   11739 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:20:30.165569   11739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:20:30.177951   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:30.326028   11739 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:20:30.360985   11739 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0416 16:20:30.361081   11739 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0416 16:20:30.366492   11739 retry.go:31] will retry after 646.519722ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
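The `retry.go` line above shows minikube polling for the containerd socket after the restart, backing off until `stat` succeeds. A hypothetical sketch of that wait loop (function name, tries, and sleep interval are illustrative, not minikube's actual implementation):

```shell
#!/usr/bin/env bash
set -eu
# Poll for a path until it exists or attempts run out, like the
# "Will wait 60s for socket path" loop in the log above.
wait_for_path() {
  path="$1"; tries="$2"; i=0
  while ! stat "$path" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then return 1; fi
    sleep 0.2
  done
  return 0
}
sock="$(mktemp -u)"           # a path that does not exist yet
( sleep 0.5; : > "$sock" ) &  # simulate containerd creating its socket late
wait_for_path "$sock" 25 && echo "socket ready"
wait
rm -f "$sock"
```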
	I0416 16:20:31.013371   11739 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0416 16:20:31.019724   11739 start.go:562] Will wait 60s for crictl version
	I0416 16:20:31.019805   11739 ssh_runner.go:195] Run: which crictl
	I0416 16:20:31.024787   11739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:20:31.062124   11739 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0416 16:20:31.062244   11739 ssh_runner.go:195] Run: containerd --version
	I0416 16:20:31.092252   11739 ssh_runner.go:195] Run: containerd --version
	I0416 16:20:31.127029   11739 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.15 ...
	I0416 16:20:31.128692   11739 main.go:141] libmachine: (addons-012036) Calling .GetIP
	I0416 16:20:31.131466   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:31.131752   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:20:31.131792   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:20:31.132079   11739 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:20:31.137162   11739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:20:31.152111   11739 kubeadm.go:877] updating cluster {Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:20:31.152209   11739 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0416 16:20:31.152279   11739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:31.190277   11739 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 16:20:31.190343   11739 ssh_runner.go:195] Run: which lz4
	I0416 16:20:31.195339   11739 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:20:31.200495   11739 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:20:31.200538   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0416 16:20:32.892543   11739 containerd.go:563] duration metric: took 1.697230091s to copy over tarball
	I0416 16:20:32.892626   11739 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:20:35.763378   11739 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.870714673s)
	I0416 16:20:35.763412   11739 containerd.go:570] duration metric: took 2.870838698s to extract the tarball
	I0416 16:20:35.763419   11739 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:20:35.805896   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:35.941248   11739 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:20:35.968495   11739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:36.021519   11739 retry.go:31] will retry after 312.428405ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-16T16:20:36Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0416 16:20:36.335218   11739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:36.380284   11739 containerd.go:627] all images are preloaded for containerd runtime.
	I0416 16:20:36.380308   11739 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:20:36.380315   11739 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.29.3 containerd true true} ...
	I0416 16:20:36.380419   11739 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-012036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:20:36.380470   11739 ssh_runner.go:195] Run: sudo crictl info
	I0416 16:20:36.420828   11739 cni.go:84] Creating CNI manager for ""
	I0416 16:20:36.420856   11739 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:20:36.420866   11739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:20:36.420885   11739 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-012036 NodeName:addons-012036 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:20:36.420997   11739 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-012036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:20:36.421054   11739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:20:36.433902   11739 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:20:36.433965   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 16:20:36.446698   11739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0416 16:20:36.467910   11739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:20:36.489267   11739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0416 16:20:36.512723   11739 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0416 16:20:36.517599   11739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
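The `/etc/hosts` update above uses a grep-then-append rewrite that is idempotent: any stale line for the name is stripped before the fresh mapping is appended, so repeated runs never duplicate the entry. A sketch against a temp file with hypothetical entries (the real command edits `/etc/hosts` via sudo):

```shell
#!/usr/bin/env bash
set -eu
hosts="$(mktemp)"
# Seed with a localhost line plus a stale mapping for the control-plane name.
printf '127.0.0.1\tlocalhost\n192.168.39.200\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing line ending in the name, then append the current mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '192.168.39.247\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
final="$(cat "$hosts")"
count="$(grep -c 'control-plane.minikube.internal' "$hosts")"
echo "entries after rewrite: $count"
rm -f "$hosts"
```

Anchoring the pattern with a leading tab and trailing `$` keeps the filter from touching other lines that merely contain the hostname as a substring.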
	I0416 16:20:36.533345   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:36.667064   11739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:20:36.692948   11739 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036 for IP: 192.168.39.247
	I0416 16:20:36.692973   11739 certs.go:194] generating shared ca certs ...
	I0416 16:20:36.693008   11739 certs.go:226] acquiring lock for ca certs: {Name:mk9ced23d0481cc75aea9804ec6a597cc9021aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:36.693149   11739 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key
	I0416 16:20:36.747986   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt ...
	I0416 16:20:36.748026   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt: {Name:mkac58f778aaf55d4b88bed00622c014e0c9b3b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:36.748227   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key ...
	I0416 16:20:36.748243   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key: {Name:mk0dde4dace016394ebca3966c4697c488b041ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:36.748361   11739 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key
	I0416 16:20:37.086132   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.crt ...
	I0416 16:20:37.086161   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.crt: {Name:mk525e75f6f10a02af5bebafaf0f8ccd3eb9b5df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.086325   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key ...
	I0416 16:20:37.086337   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key: {Name:mk568b12fa31440e2141c5fc8fb8f5ca63d07af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.086400   11739 certs.go:256] generating profile certs ...
	I0416 16:20:37.086449   11739 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.key
	I0416 16:20:37.086469   11739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt with IP's: []
	I0416 16:20:37.227588   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt ...
	I0416 16:20:37.227622   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: {Name:mk53eab5e711f42ef1130930a40f74027d4f6ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.227785   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.key ...
	I0416 16:20:37.227797   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.key: {Name:mk3618038a8a8e5bd434236ab70706479010e8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.227863   11739 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b
	I0416 16:20:37.227880   11739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.247]
	I0416 16:20:37.421130   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b ...
	I0416 16:20:37.421177   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b: {Name:mk75d1a57a155081891bfb12a29f30816b216c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.421377   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b ...
	I0416 16:20:37.421396   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b: {Name:mk34e1b3ac76f04cea4f014be3a40a6a2b0e8fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.421502   11739 certs.go:381] copying /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt.16663f7b -> /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt
	I0416 16:20:37.421594   11739 certs.go:385] copying /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key.16663f7b -> /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key
	I0416 16:20:37.421676   11739 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key
	I0416 16:20:37.421702   11739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt with IP's: []
	I0416 16:20:37.509226   11739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt ...
	I0416 16:20:37.509262   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt: {Name:mk9c3d287b8db878b1aacc52c4081f33bf154aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.509455   11739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key ...
	I0416 16:20:37.509471   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key: {Name:mk03540289e6f1ad0891e734700dfcb3b7e40690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:37.509696   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 16:20:37.509790   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/ca.pem (1078 bytes)
	I0416 16:20:37.509836   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:20:37.509872   11739 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3613/.minikube/certs/key.pem (1675 bytes)
	I0416 16:20:37.510443   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:20:37.542421   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0416 16:20:37.571601   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:20:37.600692   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 16:20:37.630011   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 16:20:37.659047   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:20:37.688474   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:20:37.717223   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:20:37.745401   11739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:20:37.774273   11739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:20:37.794785   11739 ssh_runner.go:195] Run: openssl version
	I0416 16:20:37.801401   11739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:20:37.815243   11739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:37.821066   11739 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:37.821158   11739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:37.827739   11739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:20:37.841521   11739 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:20:37.846661   11739 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:20:37.846707   11739 kubeadm.go:391] StartCluster: {Name:addons-012036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-012036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:20:37.846811   11739 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0416 16:20:37.846879   11739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:20:37.887029   11739 cri.go:89] found id: ""
	I0416 16:20:37.887113   11739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:20:37.899452   11739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:20:37.911686   11739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:20:37.923689   11739 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:20:37.923708   11739 kubeadm.go:156] found existing configuration files:
	
	I0416 16:20:37.923776   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:20:37.935176   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:20:37.935233   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:20:37.947041   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:20:37.958616   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:20:37.958688   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:20:37.970641   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:20:37.981907   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:20:37.981976   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:20:37.993821   11739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:20:38.005466   11739 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:20:38.005529   11739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:20:38.017545   11739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:20:38.072757   11739 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:20:38.072826   11739 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:20:38.253339   11739 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:20:38.253522   11739 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:20:38.253650   11739 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:20:38.507089   11739 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:20:38.510914   11739 out.go:204]   - Generating certificates and keys ...
	I0416 16:20:38.511035   11739 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:20:38.511162   11739 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:20:38.786353   11739 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:20:39.123890   11739 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:20:39.267597   11739 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:20:39.408043   11739 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:20:39.763797   11739 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:20:39.764066   11739 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-012036 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0416 16:20:40.136094   11739 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:20:40.136491   11739 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-012036 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0416 16:20:40.385981   11739 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:20:40.530577   11739 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:20:40.767039   11739 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:20:40.767418   11739 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:20:40.952976   11739 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:20:41.047614   11739 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:20:41.176543   11739 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:20:41.258363   11739 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:20:41.546069   11739 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:20:41.546780   11739 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:20:41.549402   11739 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:20:41.551622   11739 out.go:204]   - Booting up control plane ...
	I0416 16:20:41.551758   11739 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:20:41.552577   11739 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:20:41.553475   11739 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:20:41.572407   11739 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:20:41.575256   11739 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:20:41.575647   11739 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:20:41.717464   11739 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:20:48.218776   11739 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502352 seconds
	I0416 16:20:48.234635   11739 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:20:48.260642   11739 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:20:48.798917   11739 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:20:48.799127   11739 kubeadm.go:309] [mark-control-plane] Marking the node addons-012036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:20:49.316075   11739 kubeadm.go:309] [bootstrap-token] Using token: bz5n4w.4jwc771jzhysl5pt
	I0416 16:20:49.317890   11739 out.go:204]   - Configuring RBAC rules ...
	I0416 16:20:49.318055   11739 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:20:49.324756   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:20:49.339981   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:20:49.344286   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:20:49.349419   11739 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:20:49.355926   11739 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:20:49.376312   11739 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:20:49.652583   11739 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:20:49.732260   11739 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:20:49.736953   11739 kubeadm.go:309] 
	I0416 16:20:49.737046   11739 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:20:49.737059   11739 kubeadm.go:309] 
	I0416 16:20:49.737183   11739 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:20:49.737202   11739 kubeadm.go:309] 
	I0416 16:20:49.737251   11739 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:20:49.737337   11739 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:20:49.737421   11739 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:20:49.737430   11739 kubeadm.go:309] 
	I0416 16:20:49.737514   11739 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:20:49.737535   11739 kubeadm.go:309] 
	I0416 16:20:49.737598   11739 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:20:49.737607   11739 kubeadm.go:309] 
	I0416 16:20:49.737669   11739 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:20:49.737773   11739 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:20:49.737859   11739 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:20:49.737871   11739 kubeadm.go:309] 
	I0416 16:20:49.737994   11739 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:20:49.738117   11739 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:20:49.738126   11739 kubeadm.go:309] 
	I0416 16:20:49.738218   11739 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bz5n4w.4jwc771jzhysl5pt \
	I0416 16:20:49.738335   11739 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3fa17152e5a024a90abff7235e0a39e3f709584e9dfd83eb49506ea6c646c588 \
	I0416 16:20:49.738386   11739 kubeadm.go:309] 	--control-plane 
	I0416 16:20:49.738403   11739 kubeadm.go:309] 
	I0416 16:20:49.738530   11739 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:20:49.738540   11739 kubeadm.go:309] 
	I0416 16:20:49.738656   11739 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bz5n4w.4jwc771jzhysl5pt \
	I0416 16:20:49.738799   11739 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3fa17152e5a024a90abff7235e0a39e3f709584e9dfd83eb49506ea6c646c588 
	I0416 16:20:49.741316   11739 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:20:49.741689   11739 cni.go:84] Creating CNI manager for ""
	I0416 16:20:49.741707   11739 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:20:49.743815   11739 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 16:20:49.745245   11739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 16:20:49.774616   11739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 16:20:49.805560   11739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:20:49.805607   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:49.805642   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-012036 minikube.k8s.io/updated_at=2024_04_16T16_20_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=addons-012036 minikube.k8s.io/primary=true
	I0416 16:20:49.915016   11739 ops.go:34] apiserver oom_adj: -16
	I0416 16:20:50.077670   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:50.578229   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:51.077988   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:51.578499   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:52.078307   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:52.578373   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:53.078470   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:53.578383   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:54.078540   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:54.578741   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:55.078116   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:55.578325   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:56.077949   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:56.578096   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:57.078592   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:57.577958   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:58.078331   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:58.578091   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:59.078402   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:59.577895   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:00.078117   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:00.578037   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:01.077917   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:01.578705   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:02.078379   11739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:21:02.245365   11739 kubeadm.go:1107] duration metric: took 12.439805001s to wait for elevateKubeSystemPrivileges
	W0416 16:21:02.245422   11739 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:21:02.245432   11739 kubeadm.go:393] duration metric: took 24.398727479s to StartCluster
	I0416 16:21:02.245455   11739 settings.go:142] acquiring lock: {Name:mk33f15d448e67a39bb041d9835f1ffaf867de17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:21:02.245609   11739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:21:02.246096   11739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3613/kubeconfig: {Name:mk4033fe222fc9823de19ea06fe9807d5ce31bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:21:02.246354   11739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:21:02.246387   11739 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0416 16:21:02.248506   11739 out.go:177] * Verifying Kubernetes components...
	I0416 16:21:02.246460   11739 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0416 16:21:02.246575   11739 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:21:02.249961   11739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:21:02.249990   11739 addons.go:69] Setting cloud-spanner=true in profile "addons-012036"
	I0416 16:21:02.250010   11739 addons.go:69] Setting default-storageclass=true in profile "addons-012036"
	I0416 16:21:02.250016   11739 addons.go:69] Setting gcp-auth=true in profile "addons-012036"
	I0416 16:21:02.250028   11739 addons.go:69] Setting inspektor-gadget=true in profile "addons-012036"
	I0416 16:21:02.250040   11739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-012036"
	I0416 16:21:02.250046   11739 mustload.go:65] Loading cluster: addons-012036
	I0416 16:21:02.250054   11739 addons.go:69] Setting helm-tiller=true in profile "addons-012036"
	I0416 16:21:02.250046   11739 addons.go:69] Setting registry=true in profile "addons-012036"
	I0416 16:21:02.250060   11739 addons.go:234] Setting addon inspektor-gadget=true in "addons-012036"
	I0416 16:21:02.250053   11739 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-012036"
	I0416 16:21:02.250075   11739 addons.go:234] Setting addon helm-tiller=true in "addons-012036"
	I0416 16:21:02.250077   11739 addons.go:234] Setting addon registry=true in "addons-012036"
	I0416 16:21:02.250081   11739 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-012036"
	I0416 16:21:02.250093   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250104   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250118   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250178   11739 addons.go:69] Setting volumesnapshots=true in profile "addons-012036"
	I0416 16:21:02.250197   11739 addons.go:234] Setting addon volumesnapshots=true in "addons-012036"
	I0416 16:21:02.250213   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250273   11739 config.go:182] Loaded profile config "addons-012036": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:21:02.250522   11739 addons.go:69] Setting storage-provisioner=true in profile "addons-012036"
	I0416 16:21:02.250536   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250542   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250545   11739 addons.go:234] Setting addon storage-provisioner=true in "addons-012036"
	I0416 16:21:02.250552   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250553   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250563   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250570   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250573   11739 addons.go:69] Setting metrics-server=true in profile "addons-012036"
	I0416 16:21:02.250573   11739 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-012036"
	I0416 16:21:02.250584   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250593   11739 addons.go:234] Setting addon metrics-server=true in "addons-012036"
	I0416 16:21:02.250596   11739 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-012036"
	I0416 16:21:02.250618   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250625   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250646   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250657   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250673   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250922   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250971   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250987   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250046   11739 addons.go:234] Setting addon cloud-spanner=true in "addons-012036"
	I0416 16:21:02.251059   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.250539   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250922   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251218   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.249996   11739 addons.go:69] Setting ingress=true in profile "addons-012036"
	I0416 16:21:02.251262   11739 addons.go:234] Setting addon ingress=true in "addons-012036"
	I0416 16:21:02.250003   11739 addons.go:69] Setting ingress-dns=true in profile "addons-012036"
	I0416 16:21:02.251284   11739 addons.go:234] Setting addon ingress-dns=true in "addons-012036"
	I0416 16:21:02.251317   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251394   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.250004   11739 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-012036"
	I0416 16:21:02.251499   11739 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-012036"
	I0416 16:21:02.251514   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251531   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251410   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251550   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.249995   11739 addons.go:69] Setting yakd=true in profile "addons-012036"
	I0416 16:21:02.251633   11739 addons.go:234] Setting addon yakd=true in "addons-012036"
	I0416 16:21:02.251658   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.251662   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251686   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251834   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251858   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251869   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.251887   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251987   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.252022   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.250955   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.252198   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251162   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.251445   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.273724   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0416 16:21:02.276412   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0416 16:21:02.276438   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.277527   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.277688   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.277707   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.278065   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.278085   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.278128   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.278477   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.278858   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.278903   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.279093   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.279112   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.285237   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0416 16:21:02.285832   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.286451   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.286473   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.286886   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.287169   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.289189   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0416 16:21:02.291429   11739 addons.go:234] Setting addon default-storageclass=true in "addons-012036"
	I0416 16:21:02.291481   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.291886   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.291932   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.292193   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I0416 16:21:02.292221   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0416 16:21:02.292269   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0416 16:21:02.292946   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.293693   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.293714   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.293785   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0416 16:21:02.294257   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.294356   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.294880   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.294921   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.303918   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.303936   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.303970   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0416 16:21:02.304033   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0416 16:21:02.304087   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34307
	I0416 16:21:02.304264   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305289   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305303   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.305317   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305350   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305377   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305387   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.305603   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.305850   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.305864   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.305997   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.306006   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.306387   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.306539   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.306557   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.306743   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.307295   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.307314   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.307687   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.307709   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.307877   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.308550   11739 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-012036"
	I0416 16:21:02.308592   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.308833   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.308855   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.309350   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.309365   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.309670   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.309712   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.309851   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.310082   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.310260   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.310432   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.310509   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.310544   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.311157   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.311178   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.311636   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.312245   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.312278   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.312998   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:02.313366   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.313385   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.327391   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0416 16:21:02.327638   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.335514   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0416 16:21:02.337663   11739 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0416 16:21:02.339396   11739 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0416 16:21:02.339418   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0416 16:21:02.339446   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.343572   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0416 16:21:02.343754   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.343814   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.343936   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.344043   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.344391   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.344411   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.344489   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.344559   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.344571   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.344576   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.344761   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.345012   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.345073   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.345141   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.345155   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.345676   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.345708   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.345915   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.346077   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.346100   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.346117   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.346338   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.346901   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.346953   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.347044   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0416 16:21:02.347455   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.347667   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.347761   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I0416 16:21:02.347826   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.347841   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.349756   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0416 16:21:02.351259   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0416 16:21:02.349041   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0416 16:21:02.349084   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.349293   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.351347   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0416 16:21:02.351410   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.352475   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.352497   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.352520   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.352658   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.353090   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.353246   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.353695   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.353917   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.354354   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.354566   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.354809   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.356205   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.356202   11739 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0416 16:21:02.357524   11739 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0416 16:21:02.357538   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0416 16:21:02.357555   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.358866   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0416 16:21:02.357040   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.357611   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.358341   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.358355   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I0416 16:21:02.360267   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.360309   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.361620   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:21:02.360872   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.360987   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.361547   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.362233   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.363093   11739 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0416 16:21:02.364417   11739 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0416 16:21:02.364439   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0416 16:21:02.364458   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.363360   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.364525   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.363553   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.363011   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:21:02.363573   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.364024   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.365208   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.366011   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.366415   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.366430   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.366512   11739 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0416 16:21:02.366527   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0416 16:21:02.366545   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.366596   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.367337   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.367401   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.369205   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.369589   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.369627   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.369842   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.369941   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.370207   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.370368   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.370542   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.370871   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.370911   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.371077   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.371240   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.371362   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.371485   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.371835   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43551
	I0416 16:21:02.372316   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0416 16:21:02.372767   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.373311   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.373327   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.373766   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.373960   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.375887   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.376479   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.376496   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.376560   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0416 16:21:02.376960   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.377411   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.377547   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.377560   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.377891   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.378166   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.378207   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.378238   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.380326   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.382501   11739 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:21:02.383897   11739 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:21:02.383914   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:21:02.383936   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.386944   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0416 16:21:02.387888   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.387937   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.388591   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.388609   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.388683   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.388696   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.388882   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.388955   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.389501   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.389539   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.389885   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.390129   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.390345   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.392416   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0416 16:21:02.392818   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.393461   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.393477   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.393851   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.394025   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.395430   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0416 16:21:02.395991   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.398224   11739 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0416 16:21:02.396367   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.397289   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0416 16:21:02.399931   11739 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0416 16:21:02.399942   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0416 16:21:02.399962   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.400389   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.400488   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0416 16:21:02.400738   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.400752   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.400893   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.401431   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.401446   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.401701   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.401869   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.402013   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.402027   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.402367   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.402560   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.402584   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.403185   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.403225   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.404717   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.404788   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.404836   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I0416 16:21:02.405027   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0416 16:21:02.405226   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.405309   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.405311   11739 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:21:02.405323   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:21:02.405339   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.405761   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.405864   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.406026   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.406220   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.406286   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.406902   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.406917   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.407050   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.407062   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.407674   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.407730   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.407783   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.408167   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.408472   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:02.408512   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:02.408752   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.410823   11739 out.go:177]   - Using image docker.io/registry:2.8.3
	I0416 16:21:02.410096   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.410999   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.413501   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.414615   11739 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0416 16:21:02.415904   11739 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0416 16:21:02.414838   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.415922   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0416 16:21:02.415941   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.414841   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0416 16:21:02.414886   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.416035   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.417441   11739 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0416 16:21:02.416295   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.416577   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.417323   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0416 16:21:02.418789   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.418946   11739 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0416 16:21:02.418958   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0416 16:21:02.418976   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.419259   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.419282   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.419321   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.419581   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.419845   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.420035   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.420150   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.420557   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.420572   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.420687   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.421104   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.421423   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.421998   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.422027   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.423072   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.423416   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.423456   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.423617   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.425388   11739 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0416 16:21:02.423643   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.423784   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.423870   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.426545   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43853
	I0416 16:21:02.428314   11739 out.go:177]   - Using image docker.io/busybox:stable
	I0416 16:21:02.427398   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.429762   11739 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0416 16:21:02.429775   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0416 16:21:02.427703   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.429790   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.428266   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.430026   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.431694   11739 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0416 16:21:02.430444   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.430522   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.431575   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0416 16:21:02.432673   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.433117   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.433145   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.433161   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.433290   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 16:21:02.433302   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 16:21:02.433318   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.433488   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.433560   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.433620   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:02.433845   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.433892   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.434105   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:02.434117   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:02.434162   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.434358   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.434640   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:02.434898   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:02.436294   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.438347   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0416 16:21:02.436915   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:02.436940   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.437565   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.439846   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.439873   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.441162   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0416 16:21:02.439954   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.444305   11739 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0416 16:21:02.442751   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.445677   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0416 16:21:02.445690   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0416 16:21:02.445706   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.445732   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0416 16:21:02.447016   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0416 16:21:02.445887   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.448349   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0416 16:21:02.448696   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.449411   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.449787   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0416 16:21:02.449857   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.451123   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.449975   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.451085   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0416 16:21:02.451359   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.452836   11739 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0416 16:21:02.454126   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0416 16:21:02.454145   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0416 16:21:02.454161   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:02.453010   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:02.457458   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.457969   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:02.457992   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:02.458210   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:02.458408   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:02.458572   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:02.458724   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	W0416 16:21:02.461672   11739 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38990->192.168.39.247:22: read: connection reset by peer
	I0416 16:21:02.461701   11739 retry.go:31] will retry after 189.459023ms: ssh: handshake failed: read tcp 192.168.39.1:38990->192.168.39.247:22: read: connection reset by peer
	W0416 16:21:02.461755   11739 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38994->192.168.39.247:22: read: connection reset by peer
	I0416 16:21:02.461762   11739 retry.go:31] will retry after 218.884854ms: ssh: handshake failed: read tcp 192.168.39.1:38994->192.168.39.247:22: read: connection reset by peer
	I0416 16:21:03.125540   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:21:03.152937   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0416 16:21:03.160973   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:21:03.217662   11739 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0416 16:21:03.217693   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0416 16:21:03.235220   11739 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0416 16:21:03.235240   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0416 16:21:03.271948   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0416 16:21:03.273061   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0416 16:21:03.335410   11739 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0416 16:21:03.335430   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0416 16:21:03.367315   11739 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0416 16:21:03.367343   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0416 16:21:03.383532   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0416 16:21:03.449647   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 16:21:03.449679   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0416 16:21:03.558017   11739 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.31162988s)
	I0416 16:21:03.558124   11739 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.308134967s)
	I0416 16:21:03.558206   11739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:21:03.558259   11739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:21:03.604155   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0416 16:21:03.604185   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0416 16:21:03.675082   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0416 16:21:03.712954   11739 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0416 16:21:03.712991   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0416 16:21:03.725620   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0416 16:21:03.725652   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0416 16:21:03.957137   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0416 16:21:03.986823   11739 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0416 16:21:03.986857   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0416 16:21:04.002037   11739 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0416 16:21:04.002071   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0416 16:21:04.076353   11739 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0416 16:21:04.076384   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0416 16:21:04.109301   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 16:21:04.109336   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 16:21:04.207442   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0416 16:21:04.207470   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0416 16:21:04.285930   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0416 16:21:04.285954   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0416 16:21:04.433178   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0416 16:21:04.447688   11739 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0416 16:21:04.447709   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0416 16:21:04.451980   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0416 16:21:04.452012   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0416 16:21:04.527445   11739 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0416 16:21:04.529677   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0416 16:21:04.535430   11739 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 16:21:04.535457   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 16:21:04.559750   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0416 16:21:04.559776   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0416 16:21:04.632335   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0416 16:21:04.632359   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0416 16:21:04.737197   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 16:21:04.738632   11739 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0416 16:21:04.738649   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0416 16:21:04.851978   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0416 16:21:04.852013   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0416 16:21:04.853932   11739 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0416 16:21:04.853952   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0416 16:21:05.066419   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0416 16:21:05.124481   11739 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:21:05.124509   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0416 16:21:05.289397   11739 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0416 16:21:05.289425   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0416 16:21:05.330380   11739 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0416 16:21:05.330407   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0416 16:21:05.358721   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:21:05.518156   11739 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0416 16:21:05.518181   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0416 16:21:05.666777   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0416 16:21:05.666801   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0416 16:21:05.789309   11739 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0416 16:21:05.789335   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0416 16:21:05.946238   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0416 16:21:05.946272   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0416 16:21:06.086159   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0416 16:21:06.173694   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0416 16:21:06.173728   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0416 16:21:06.569130   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0416 16:21:06.569155   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0416 16:21:06.933508   11739 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0416 16:21:06.933541   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0416 16:21:07.192066   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0416 16:21:08.620731   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.495153458s)
	I0416 16:21:08.620786   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.467813231s)
	I0416 16:21:08.620824   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.620832   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.459833165s)
	I0416 16:21:08.620861   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.620879   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.620836   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.620792   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.620935   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.620972   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.348997284s)
	I0416 16:21:08.621005   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621018   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621032   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.347948607s)
	I0416 16:21:08.621057   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621069   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621349   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621354   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621392   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621409   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621423   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621435   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621455   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621411   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621502   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621514   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621524   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621531   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621586   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621607   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621638   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.621653   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.621915   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621933   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621938   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.621957   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621962   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.621964   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.621969   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.622011   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.622031   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.622047   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.622063   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.623108   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.623161   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623169   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.623294   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623306   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.623315   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.623323   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.623917   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623929   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.623965   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:08.623982   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.623989   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:08.705212   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:08.705234   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:08.705610   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:08.705630   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:09.229269   11739 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0416 16:21:09.229318   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:09.232687   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:09.233197   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:09.233234   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:09.233439   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:09.233683   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:09.233874   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:09.234078   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:09.678964   11739 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0416 16:21:10.109897   11739 addons.go:234] Setting addon gcp-auth=true in "addons-012036"
	I0416 16:21:10.109953   11739 host.go:66] Checking if "addons-012036" exists ...
	I0416 16:21:10.110378   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:10.110421   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:10.126412   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0416 16:21:10.126911   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:10.127512   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:10.127542   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:10.127967   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:10.128454   11739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:21:10.128487   11739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:21:10.145505   11739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0416 16:21:10.145939   11739 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:21:10.146434   11739 main.go:141] libmachine: Using API Version  1
	I0416 16:21:10.146452   11739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:21:10.146818   11739 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:21:10.147052   11739 main.go:141] libmachine: (addons-012036) Calling .GetState
	I0416 16:21:10.148828   11739 main.go:141] libmachine: (addons-012036) Calling .DriverName
	I0416 16:21:10.149088   11739 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0416 16:21:10.149117   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHHostname
	I0416 16:21:10.151756   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:10.152182   11739 main.go:141] libmachine: (addons-012036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cf:c9", ip: ""} in network mk-addons-012036: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:16 +0000 UTC Type:0 Mac:52:54:00:dd:cf:c9 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-012036 Clientid:01:52:54:00:dd:cf:c9}
	I0416 16:21:10.152207   11739 main.go:141] libmachine: (addons-012036) DBG | domain addons-012036 has defined IP address 192.168.39.247 and MAC address 52:54:00:dd:cf:c9 in network mk-addons-012036
	I0416 16:21:10.152375   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHPort
	I0416 16:21:10.152554   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHKeyPath
	I0416 16:21:10.152707   11739 main.go:141] libmachine: (addons-012036) Calling .GetSSHUsername
	I0416 16:21:10.152863   11739 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/addons-012036/id_rsa Username:docker}
	I0416 16:21:12.906642   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.523070287s)
	I0416 16:21:12.906687   11739 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.348452998s)
	I0416 16:21:12.906744   11739 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.348457718s)
	I0416 16:21:12.906776   11739 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0416 16:21:12.906694   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.906803   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.906891   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.231777377s)
	I0416 16:21:12.906927   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.906936   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.906935   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.949755196s)
	I0416 16:21:12.906955   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.906972   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907021   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.473810332s)
	I0416 16:21:12.907042   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907051   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907163   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.169939474s)
	I0416 16:21:12.907165   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907185   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907196   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907208   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907217   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.907226   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907234   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907255   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907268   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.840822578s)
	I0416 16:21:12.907279   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907282   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907310   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907287   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.907321   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907327   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907373   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.548614454s)
	W0416 16:21:12.907404   11739 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0416 16:21:12.907425   11739 retry.go:31] will retry after 349.044494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0416 16:21:12.907448   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.82125895s)
	I0416 16:21:12.907470   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.907479   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.907582   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907616   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907623   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.907633   11739 addons.go:470] Verifying addon ingress=true in "addons-012036"
	I0416 16:21:12.907684   11739 node_ready.go:35] waiting up to 6m0s for node "addons-012036" to be "Ready" ...
	I0416 16:21:12.911340   11739 out.go:177] * Verifying ingress addon...
	I0416 16:21:12.907829   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907847   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.907868   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907873   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907892   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.907892   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910021   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910039   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.910044   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910043   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.910066   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.910069   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.912892   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912909   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912927   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912941   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.912952   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.912972   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.912980   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912955   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912912   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.913059   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912984   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.913094   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.913100   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.912893   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.913138   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:12.913146   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:12.913757   11739 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0416 16:21:12.914881   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914891   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914886   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914891   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914902   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.914907   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914910   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.914932   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914945   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.914949   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.916813   11739 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-012036 service yakd-dashboard -n yakd-dashboard
	
	I0416 16:21:12.914954   11739 addons.go:470] Verifying addon metrics-server=true in "addons-012036"
	I0416 16:21:12.914936   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.914973   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:12.914976   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:12.918607   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.918625   11739 addons.go:470] Verifying addon registry=true in "addons-012036"
	I0416 16:21:12.918627   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:12.920437   11739 out.go:177] * Verifying registry addon...
	I0416 16:21:12.922391   11739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0416 16:21:12.952888   11739 node_ready.go:49] node "addons-012036" has status "Ready":"True"
	I0416 16:21:12.952921   11739 node_ready.go:38] duration metric: took 45.217454ms for node "addons-012036" to be "Ready" ...
	I0416 16:21:12.952933   11739 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:21:12.985362   11739 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0416 16:21:12.985388   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.026256   11739 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0416 16:21:13.026277   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:13.064219   11739 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gl82p" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:13.106683   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:13.106721   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:13.107030   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:13.107047   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:13.257166   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:21:13.414506   11739 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-012036" context rescaled to 1 replicas
	I0416 16:21:13.433313   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.458987   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:13.918718   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.939568   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:14.454317   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:14.455338   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:14.951464   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:14.984151   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:15.007238   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.815108326s)
	I0416 16:21:15.007273   11739 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.858161577s)
	I0416 16:21:15.007303   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:15.007322   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:15.009451   11739 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:21:15.007612   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:15.007648   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:15.011311   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:15.011322   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:15.011329   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:15.013198   11739 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0416 16:21:15.011598   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:15.011628   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:15.014988   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:15.015003   11739 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-012036"
	I0416 16:21:15.015032   11739 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0416 16:21:15.015057   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0416 16:21:15.016775   11739 out.go:177] * Verifying csi-hostpath-driver addon...
	I0416 16:21:15.019486   11739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0416 16:21:15.091562   11739 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0416 16:21:15.091596   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:15.157055   11739 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0416 16:21:15.157080   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0416 16:21:15.205622   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:15.294230   11739 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0416 16:21:15.294255   11739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0416 16:21:15.452946   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:15.463866   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:15.507523   11739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0416 16:21:15.533002   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:15.919690   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:15.928648   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.025982   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:16.354686   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.097470069s)
	I0416 16:21:16.354749   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:16.354765   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:16.355101   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:16.355117   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:16.355130   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:16.355154   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:16.355157   11739 main.go:141] libmachine: (addons-012036) DBG | Closing plugin on server side
	I0416 16:21:16.355405   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:16.355423   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:16.422759   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:16.428095   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.529478   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:16.946125   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.951962   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.055631   11739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.548068318s)
	I0416 16:21:17.055685   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:17.055699   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:17.055814   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:17.056015   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:17.056032   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:17.056041   11739 main.go:141] libmachine: Making call to close driver server
	I0416 16:21:17.056049   11739 main.go:141] libmachine: (addons-012036) Calling .Close
	I0416 16:21:17.056363   11739 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:21:17.056383   11739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:21:17.058704   11739 addons.go:470] Verifying addon gcp-auth=true in "addons-012036"
	I0416 16:21:17.060363   11739 out.go:177] * Verifying gcp-auth addon...
	I0416 16:21:17.062577   11739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0416 16:21:17.084754   11739 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0416 16:21:17.084783   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:17.418498   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.429300   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:17.528729   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:17.571335   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:17.575611   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:17.920509   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.929016   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:18.026504   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:18.069437   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:18.419342   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:18.428828   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:18.528965   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:18.687813   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:18.918877   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:18.928510   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:19.028855   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:19.066768   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:19.422740   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:19.433720   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:19.527065   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:19.568612   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:19.918988   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:19.929292   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:20.026828   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:20.070029   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:20.072712   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:20.419631   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:20.427038   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:20.526013   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:20.569871   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:20.918782   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:20.929007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:21.026968   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.069316   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:21.419284   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.431226   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:21.526116   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.569704   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:21.920799   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.930105   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:22.026697   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:22.068153   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:22.074572   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:22.419675   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:22.427675   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:22.527464   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:22.586536   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:22.919890   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:22.932507   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:23.029985   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:23.321093   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:23.423645   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:23.433621   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:23.526249   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:23.567208   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:23.919063   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:23.927455   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:24.025889   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:24.067262   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:24.419219   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:24.427363   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:24.526297   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:24.566506   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:24.572700   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:24.920426   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:24.928796   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:25.028600   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:25.071003   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:25.426558   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:25.434100   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:25.544386   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:25.571721   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:25.921057   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:25.929363   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:26.027998   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:26.070517   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:26.419234   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:26.428007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:26.526370   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:26.568441   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:26.919244   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:26.941230   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.027059   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:27.066551   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:27.072762   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:27.419116   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:27.429041   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.526043   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:27.571483   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:27.977563   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.978471   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.026437   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:28.067572   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:28.419017   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.428929   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:28.527080   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:28.566312   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:28.920840   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.932883   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:29.027460   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:29.066656   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:29.419461   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:29.428031   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:29.528289   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:29.570994   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:29.574892   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:29.918335   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:29.927928   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:30.026471   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:30.069840   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:30.421753   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:30.427721   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:30.540511   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:30.568451   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:30.920571   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:30.928588   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:31.026502   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:31.066534   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:31.428180   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:31.437014   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:31.525627   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:31.569411   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:31.575879   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:31.919571   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:31.927697   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:32.027713   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:32.072358   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:32.418907   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:32.433717   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:32.527771   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:32.605694   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:32.919662   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:32.928640   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:33.028527   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:33.068989   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:33.436061   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:33.439057   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:33.525595   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:33.567846   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:33.919668   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:33.927768   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:34.026260   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:34.068943   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:34.071809   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:34.418935   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:34.429259   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:34.528664   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:34.568883   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:34.918810   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:34.934556   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:35.026726   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:35.067919   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:35.433784   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:35.433951   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:35.528585   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:35.566789   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:35.919488   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:35.928195   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:36.026872   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:36.068872   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:36.072342   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:36.419292   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:36.426967   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:36.533258   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:36.570771   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:36.919626   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:36.927951   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:37.037785   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:37.067052   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:37.422569   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:37.439599   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:37.529498   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:37.567901   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:37.918424   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:37.933129   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:38.026714   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:38.066818   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:38.079951   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:38.420809   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:38.429074   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:38.538509   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:38.567575   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:38.918180   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:38.928164   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:39.026326   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:39.068757   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:39.419843   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:39.428441   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:39.544236   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:39.579130   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:39.918954   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:39.928605   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:40.028791   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:40.069156   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:40.419662   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:40.428726   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:40.531405   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:40.580027   11739 pod_ready.go:102] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"False"
	I0416 16:21:40.581930   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.157259   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.158600   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.168482   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:41.173406   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:41.418383   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.427485   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:41.526159   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:41.568324   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.919921   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.929966   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:42.025604   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:42.068805   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:42.419268   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.427905   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:42.532156   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:42.599755   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:42.600033   11739 pod_ready.go:92] pod "coredns-76f75df574-gl82p" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.600055   11739 pod_ready.go:81] duration metric: took 29.53580147s for pod "coredns-76f75df574-gl82p" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.600066   11739 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wvjzk" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.606322   11739 pod_ready.go:97] error getting pod "coredns-76f75df574-wvjzk" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-wvjzk" not found
	I0416 16:21:42.606363   11739 pod_ready.go:81] duration metric: took 6.281949ms for pod "coredns-76f75df574-wvjzk" in "kube-system" namespace to be "Ready" ...
	E0416 16:21:42.606377   11739 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-wvjzk" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-wvjzk" not found
	I0416 16:21:42.606386   11739 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.676961   11739 pod_ready.go:92] pod "etcd-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.676988   11739 pod_ready.go:81] duration metric: took 70.59396ms for pod "etcd-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.677001   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.740484   11739 pod_ready.go:92] pod "kube-apiserver-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.740511   11739 pod_ready.go:81] duration metric: took 63.502271ms for pod "kube-apiserver-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.740525   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.763193   11739 pod_ready.go:92] pod "kube-controller-manager-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.763222   11739 pod_ready.go:81] duration metric: took 22.689553ms for pod "kube-controller-manager-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.763240   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6dq9" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.784485   11739 pod_ready.go:92] pod "kube-proxy-s6dq9" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:42.784518   11739 pod_ready.go:81] duration metric: took 21.270314ms for pod "kube-proxy-s6dq9" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.784530   11739 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:42.921372   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.928974   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:43.027687   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:43.070244   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:43.168434   11739 pod_ready.go:92] pod "kube-scheduler-addons-012036" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:43.168458   11739 pod_ready.go:81] duration metric: took 383.92007ms for pod "kube-scheduler-addons-012036" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.168469   11739 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-rh5ch" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.418971   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:43.428659   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:43.525960   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:43.568354   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:43.570224   11739 pod_ready.go:92] pod "metrics-server-75d6c48ddd-rh5ch" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:43.570255   11739 pod_ready.go:81] duration metric: took 401.77778ms for pod "metrics-server-75d6c48ddd-rh5ch" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.570270   11739 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nwsz2" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.918550   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:43.927375   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:43.968212   11739 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-nwsz2" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:43.968238   11739 pod_ready.go:81] duration metric: took 397.960681ms for pod "nvidia-device-plugin-daemonset-nwsz2" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:43.968255   11739 pod_ready.go:38] duration metric: took 31.015307403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:21:43.968269   11739 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:21:43.968321   11739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:21:43.998399   11739 api_server.go:72] duration metric: took 41.751973974s to wait for apiserver process to appear ...
	I0416 16:21:43.998429   11739 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:21:43.998451   11739 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I0416 16:21:44.008704   11739 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I0416 16:21:44.011206   11739 api_server.go:141] control plane version: v1.29.3
	I0416 16:21:44.011235   11739 api_server.go:131] duration metric: took 12.80009ms to wait for apiserver health ...
	I0416 16:21:44.011243   11739 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:21:44.045094   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:44.068699   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:44.320585   11739 system_pods.go:59] 18 kube-system pods found
	I0416 16:21:44.320619   11739 system_pods.go:61] "coredns-76f75df574-gl82p" [ce0d912e-d8fc-45eb-a25f-3cdbe67e511c] Running
	I0416 16:21:44.320626   11739 system_pods.go:61] "csi-hostpath-attacher-0" [60a4dcb7-fc8d-45d7-912a-052b70ffedea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0416 16:21:44.320634   11739 system_pods.go:61] "csi-hostpath-resizer-0" [ed11f0c4-aade-4f74-ae20-250260b20010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0416 16:21:44.320643   11739 system_pods.go:61] "csi-hostpathplugin-vfbkp" [6942c4bf-39db-43ca-bf0e-52f91546c9da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0416 16:21:44.320649   11739 system_pods.go:61] "etcd-addons-012036" [501c490e-9df4-4d77-ab24-6b1c484f3f57] Running
	I0416 16:21:44.320654   11739 system_pods.go:61] "kube-apiserver-addons-012036" [a206cfa9-3edb-411e-85d6-c5973862d675] Running
	I0416 16:21:44.320659   11739 system_pods.go:61] "kube-controller-manager-addons-012036" [5efce1ab-3b04-4892-b978-41d3132da3f9] Running
	I0416 16:21:44.320669   11739 system_pods.go:61] "kube-ingress-dns-minikube" [0445f263-dae8-46f5-a610-7bf97d2e8310] Running
	I0416 16:21:44.320678   11739 system_pods.go:61] "kube-proxy-s6dq9" [3870d3d7-c051-4d2c-aaed-8b4e4e59d483] Running
	I0416 16:21:44.320683   11739 system_pods.go:61] "kube-scheduler-addons-012036" [5d9ec397-85be-4b49-934c-bce74b51177d] Running
	I0416 16:21:44.320688   11739 system_pods.go:61] "metrics-server-75d6c48ddd-rh5ch" [dd9e68e9-89db-492e-b995-43adcef90c7b] Running
	I0416 16:21:44.320693   11739 system_pods.go:61] "nvidia-device-plugin-daemonset-nwsz2" [c725f54f-6971-493f-bfd5-62cf6aec55cd] Running
	I0416 16:21:44.320696   11739 system_pods.go:61] "registry-jcxdc" [b635d906-6cfa-4550-af73-b2a6efeed3a1] Running
	I0416 16:21:44.320700   11739 system_pods.go:61] "registry-proxy-vnvqm" [337f4757-d2bc-47a6-a02c-27da4429dc2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0416 16:21:44.320707   11739 system_pods.go:61] "snapshot-controller-58dbcc7b99-dmcpx" [776bbbd0-0b95-4985-8780-201db3bb42a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.320718   11739 system_pods.go:61] "snapshot-controller-58dbcc7b99-wr6z2" [213f9675-e555-47a7-82fc-5a5323329e00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.320725   11739 system_pods.go:61] "storage-provisioner" [943be509-0cb7-46d3-be2a-414fc7408f93] Running
	I0416 16:21:44.320730   11739 system_pods.go:61] "tiller-deploy-7b677967b9-jqj87" [fa15f4cf-8401-4c01-8f66-8e92e3945327] Running
	I0416 16:21:44.320739   11739 system_pods.go:74] duration metric: took 309.489554ms to wait for pod list to return data ...
	I0416 16:21:44.320749   11739 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:21:44.368246   11739 default_sa.go:45] found service account: "default"
	I0416 16:21:44.368274   11739 default_sa.go:55] duration metric: took 47.515468ms for default service account to be created ...
	I0416 16:21:44.368282   11739 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:21:44.423629   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:44.429057   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:44.526289   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:44.566300   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:44.577771   11739 system_pods.go:86] 18 kube-system pods found
	I0416 16:21:44.577814   11739 system_pods.go:89] "coredns-76f75df574-gl82p" [ce0d912e-d8fc-45eb-a25f-3cdbe67e511c] Running
	I0416 16:21:44.577823   11739 system_pods.go:89] "csi-hostpath-attacher-0" [60a4dcb7-fc8d-45d7-912a-052b70ffedea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0416 16:21:44.577831   11739 system_pods.go:89] "csi-hostpath-resizer-0" [ed11f0c4-aade-4f74-ae20-250260b20010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0416 16:21:44.577838   11739 system_pods.go:89] "csi-hostpathplugin-vfbkp" [6942c4bf-39db-43ca-bf0e-52f91546c9da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0416 16:21:44.577844   11739 system_pods.go:89] "etcd-addons-012036" [501c490e-9df4-4d77-ab24-6b1c484f3f57] Running
	I0416 16:21:44.577850   11739 system_pods.go:89] "kube-apiserver-addons-012036" [a206cfa9-3edb-411e-85d6-c5973862d675] Running
	I0416 16:21:44.577857   11739 system_pods.go:89] "kube-controller-manager-addons-012036" [5efce1ab-3b04-4892-b978-41d3132da3f9] Running
	I0416 16:21:44.577864   11739 system_pods.go:89] "kube-ingress-dns-minikube" [0445f263-dae8-46f5-a610-7bf97d2e8310] Running
	I0416 16:21:44.577870   11739 system_pods.go:89] "kube-proxy-s6dq9" [3870d3d7-c051-4d2c-aaed-8b4e4e59d483] Running
	I0416 16:21:44.577876   11739 system_pods.go:89] "kube-scheduler-addons-012036" [5d9ec397-85be-4b49-934c-bce74b51177d] Running
	I0416 16:21:44.577887   11739 system_pods.go:89] "metrics-server-75d6c48ddd-rh5ch" [dd9e68e9-89db-492e-b995-43adcef90c7b] Running
	I0416 16:21:44.577893   11739 system_pods.go:89] "nvidia-device-plugin-daemonset-nwsz2" [c725f54f-6971-493f-bfd5-62cf6aec55cd] Running
	I0416 16:21:44.577903   11739 system_pods.go:89] "registry-jcxdc" [b635d906-6cfa-4550-af73-b2a6efeed3a1] Running
	I0416 16:21:44.577915   11739 system_pods.go:89] "registry-proxy-vnvqm" [337f4757-d2bc-47a6-a02c-27da4429dc2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0416 16:21:44.577927   11739 system_pods.go:89] "snapshot-controller-58dbcc7b99-dmcpx" [776bbbd0-0b95-4985-8780-201db3bb42a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.577937   11739 system_pods.go:89] "snapshot-controller-58dbcc7b99-wr6z2" [213f9675-e555-47a7-82fc-5a5323329e00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:44.577945   11739 system_pods.go:89] "storage-provisioner" [943be509-0cb7-46d3-be2a-414fc7408f93] Running
	I0416 16:21:44.577950   11739 system_pods.go:89] "tiller-deploy-7b677967b9-jqj87" [fa15f4cf-8401-4c01-8f66-8e92e3945327] Running
	I0416 16:21:44.577961   11739 system_pods.go:126] duration metric: took 209.673583ms to wait for k8s-apps to be running ...
	I0416 16:21:44.577971   11739 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:21:44.578031   11739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:21:44.596715   11739 system_svc.go:56] duration metric: took 18.736097ms WaitForService to wait for kubelet
	I0416 16:21:44.596755   11739 kubeadm.go:576] duration metric: took 42.350333594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:21:44.596781   11739 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:21:44.769176   11739 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:21:44.769208   11739 node_conditions.go:123] node cpu capacity is 2
	I0416 16:21:44.769219   11739 node_conditions.go:105] duration metric: took 172.432936ms to run NodePressure ...
	I0416 16:21:44.769230   11739 start.go:240] waiting for startup goroutines ...
	I0416 16:21:44.918938   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:44.928007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:45.026009   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:45.067067   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:45.420070   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:45.433085   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:45.526156   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:45.567468   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:45.919498   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:45.928749   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:46.034305   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:46.067107   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:46.423238   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:46.429876   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:46.540331   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:46.573821   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:46.924514   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:46.930047   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:47.035215   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:47.068568   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:47.419016   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:47.429188   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:47.542077   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:47.567311   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:47.920095   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:47.929300   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:48.032233   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:48.067144   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:48.419797   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:48.429324   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:48.533309   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:48.567740   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:48.920755   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:48.928808   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:49.028510   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:49.071276   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:49.422874   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:49.433560   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:49.563574   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:49.567409   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:49.918647   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:49.929876   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:50.031612   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:50.067663   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:50.419419   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:50.427666   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:50.526517   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:50.567605   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:50.920134   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:50.935196   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:51.027228   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:51.068506   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:51.419537   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:51.428296   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:51.527417   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:51.566988   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:51.918679   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:51.928245   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:52.026395   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:52.067348   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:52.608007   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:52.609694   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:52.610067   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:52.611916   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:52.921796   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:52.927999   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:53.029758   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:53.068596   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:53.421296   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:53.428034   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:53.526984   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:53.571293   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:53.918982   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:53.928550   11739 kapi.go:107] duration metric: took 41.006157226s to wait for kubernetes.io/minikube-addons=registry ...
	I0416 16:21:54.027826   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:54.066273   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:54.420492   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:54.527817   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:54.566925   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:54.919980   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:55.026302   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:55.067536   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:55.422916   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:55.528943   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:55.569114   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:55.919250   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:56.028236   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:56.066943   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:56.418523   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:56.528961   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:56.573769   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.095128   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.096024   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.099583   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:57.419282   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.526351   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:57.567086   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.918654   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:58.026864   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:58.066933   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:58.421658   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:58.532139   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:58.567278   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:58.920580   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:59.026632   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:59.067998   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:59.418818   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:59.533691   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:59.567356   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:59.918597   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:00.026652   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:00.068887   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:00.418676   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:00.527773   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:00.567245   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:00.919363   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:01.025654   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:01.067407   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:01.420162   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:01.657351   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:01.666918   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:01.918890   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:02.027098   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:02.068887   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:02.426828   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:02.536138   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:02.569596   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:02.922699   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:03.026164   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:03.067186   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:03.422306   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:03.526396   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:03.566535   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:03.919387   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:04.028116   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:04.066990   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:04.419072   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:04.526527   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:04.567490   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:04.926334   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:05.026091   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:05.068499   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:05.420077   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:05.526427   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:05.566637   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:05.919430   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:06.025799   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:06.075097   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:06.428150   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:06.533086   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:06.567956   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:06.919028   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:07.026475   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:07.066433   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:07.420959   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:07.526043   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:07.566841   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:07.918663   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:08.026995   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:08.067000   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:08.418656   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:08.525750   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:22:08.568222   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:08.923294   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:09.026362   11739 kapi.go:107] duration metric: took 54.006877276s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0416 16:22:09.066971   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:09.421417   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:09.567581   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:09.920236   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:10.067033   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:10.418970   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:10.567521   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:10.921107   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:11.067581   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:11.419399   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:11.567860   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:11.919496   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:12.067248   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:12.420964   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:12.567742   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:12.918725   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:13.067011   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:13.419164   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:13.568325   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:13.919626   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:14.068051   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:14.418694   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:14.568097   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:14.920642   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:15.067859   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:15.418672   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:15.567575   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:15.923572   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:16.067436   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:16.419522   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:16.567392   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:16.919809   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:17.066983   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:17.433300   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:17.567843   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:18.217053   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:18.221262   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:18.421781   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:18.567043   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:18.919170   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:19.067583   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:19.424987   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:19.567389   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:19.920642   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:20.067729   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:20.423612   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:20.566692   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:20.923986   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:21.068049   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:21.449608   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:21.566403   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:21.919594   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:22.066559   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:22.421085   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:22.567447   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:22.920995   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:23.067623   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:23.442449   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:23.570662   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:23.976244   11739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:24.088177   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:24.422768   11739 kapi.go:107] duration metric: took 1m11.509008192s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0416 16:22:24.574530   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:25.067008   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:25.566259   11739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:26.068803   11739 kapi.go:107] duration metric: took 1m9.006220739s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0416 16:22:26.070824   11739 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-012036 cluster.
	I0416 16:22:26.072285   11739 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0416 16:22:26.073800   11739 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0416 16:22:26.075306   11739 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, helm-tiller, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0416 16:22:26.076737   11739 addons.go:505] duration metric: took 1m23.830287578s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass helm-tiller metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0416 16:22:26.076791   11739 start.go:245] waiting for cluster config update ...
	I0416 16:22:26.076808   11739 start.go:254] writing updated cluster config ...
	I0416 16:22:26.077064   11739 ssh_runner.go:195] Run: rm -f paused
	I0416 16:22:26.137028   11739 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 16:22:26.139159   11739 out.go:177] * Done! kubectl is now configured to use "addons-012036" cluster and "default" namespace by default
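	The gcp-auth messages above describe opting a pod out of credential mounting via a `gcp-auth-skip-secret` label. As a minimal sketch (the pod name and image below are illustrative placeholders, not part of this test run), such a manifest might look like:

	```yaml
	# Hypothetical pod spec showing the gcp-auth opt-out label mentioned in the
	# log output above; name and image are placeholders for illustration only.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx
	```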
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	4ce8234c78e56       98f6c3b32d565       3 seconds ago        Exited              helm-test                    0                   a06a639a22bde       helm-test
	e6e500e0cfb34       dd1b12fcb6097       5 seconds ago        Running             hello-world-app              0                   95221feb08a59       hello-world-app-5d77478584-9rvj4
	503c45024e017       e289a478ace02       16 seconds ago       Running             nginx                        0                   4f7f5d61c0843       nginx
	ed6ddd1e144f3       7373e995f4086       18 seconds ago       Running             headlamp                     0                   0ce3daf3e73e6       headlamp-5b77dbd7c4-z758s
	e52e5d108177b       a416a98b71e22       21 seconds ago       Exited              helper-pod                   0                   da673e61d30c1       helper-pod-delete-pvc-8f41ec9b-ffc7-4a6a-90f0-74da7d87242a
	8458c44bf38f0       ba5dc23f65d4c       25 seconds ago       Exited              busybox                      0                   1c5c3aa926f39       test-local-path
	64447d010527c       db2fc13d44d50       47 seconds ago       Running             gcp-auth                     0                   a38442e42d2f9       gcp-auth-7d69788767-6prgz
	a344f59b6b138       ffcc66479b5ba       50 seconds ago       Running             controller                   0                   0bdec488f9cff       ingress-nginx-controller-65496f9567-88dw2
	86f1572e10b06       59cbb42146a37       About a minute ago   Exited              csi-attacher                 0                   a4a08f51702c8       csi-hostpath-attacher-0
	5a2c5d1c2d8f8       b29d748098e32       About a minute ago   Exited              patch                        0                   cd5b763a125cb       ingress-nginx-admission-patch-kscqv
	bf89a3e6bbb5d       b29d748098e32       About a minute ago   Exited              create                       0                   d9d1f31083959       ingress-nginx-admission-create-zpdtd
	3c4ada40b02b1       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller   0                   0d585ad0b737c       snapshot-controller-58dbcc7b99-wr6z2
	c25c9e32964c8       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller   0                   66c66b59d6426       snapshot-controller-58dbcc7b99-dmcpx
	bba135a45c3af       e16d1e3a10667       About a minute ago   Running             local-path-provisioner       0                   7f73a664b403d       local-path-provisioner-78b46b4d5c-fgvc7
	70f2718c1ab6c       31de47c733c91       About a minute ago   Running             yakd                         0                   e355a26fbd015       yakd-dashboard-9947fc6bf-knpbf
	0830eb1f2606b       3f39089e90831       About a minute ago   Exited              tiller                       0                   56822ad5c90e2       tiller-deploy-7b677967b9-jqj87
	d754b9971ad2d       6e38f40d628db       2 minutes ago        Running             storage-provisioner          0                   679c17820d273       storage-provisioner
	f7179288f854b       cbb01a7bd410d       2 minutes ago        Running             coredns                      0                   61d82bf55b66d       coredns-76f75df574-gl82p
	b656b7633700b       a1d263b5dc5b0       2 minutes ago        Running             kube-proxy                   0                   2d6ab0273ee54       kube-proxy-s6dq9
	24af4e069b22f       8c390d98f50c0       2 minutes ago        Running             kube-scheduler               0                   dbf77639f3fd7       kube-scheduler-addons-012036
	48a1e53b66a23       39f995c9f1996       2 minutes ago        Running             kube-apiserver               0                   09c33e1ba2865       kube-apiserver-addons-012036
	085bd521d80e6       3861cfcd7c04c       2 minutes ago        Running             etcd                         0                   3472a3055087b       etcd-addons-012036
	87ef232e07b96       6052a25da3f97       2 minutes ago        Running             kube-controller-manager      0                   fc66104249ac6       kube-controller-manager-addons-012036
	
	
	==> containerd <==
	Apr 16 16:23:10 addons-012036 containerd[649]: time="2024-04-16T16:23:10.458023379Z" level=info msg="shim disconnected" id=4ce8234c78e56e739dd4a1c4dd38418eb2a57ffa8ecd9c21e0a9e8766c979468 namespace=k8s.io
	Apr 16 16:23:10 addons-012036 containerd[649]: time="2024-04-16T16:23:10.458341125Z" level=warning msg="cleaning up after shim disconnected" id=4ce8234c78e56e739dd4a1c4dd38418eb2a57ffa8ecd9c21e0a9e8766c979468 namespace=k8s.io
	Apr 16 16:23:10 addons-012036 containerd[649]: time="2024-04-16T16:23:10.458496145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 16 16:23:10 addons-012036 containerd[649]: time="2024-04-16T16:23:10.488765613Z" level=warning msg="cleanup warnings time=\"2024-04-16T16:23:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.276133246Z" level=info msg="Finish port forwarding for \"56822ad5c90e2a5d5c4744ba861ef6872727ab2dcfe6fde64544f2096ee94b9a\" port 44134"
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.599937542Z" level=info msg="StopPodSandbox for \"a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1\""
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.600191238Z" level=info msg="Container to stop \"4ce8234c78e56e739dd4a1c4dd38418eb2a57ffa8ecd9c21e0a9e8766c979468\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.672038788Z" level=info msg="shim disconnected" id=a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1 namespace=k8s.io
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.672138324Z" level=warning msg="cleaning up after shim disconnected" id=a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1 namespace=k8s.io
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.672154189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.796361177Z" level=info msg="TearDown network for sandbox \"a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1\" successfully"
	Apr 16 16:23:11 addons-012036 containerd[649]: time="2024-04-16T16:23:11.796593721Z" level=info msg="StopPodSandbox for \"a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1\" returns successfully"
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.411822452Z" level=info msg="StopContainer for \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\" with timeout 30 (s)"
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.413250644Z" level=info msg="Stop container \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\" with signal terminated"
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.573069083Z" level=info msg="shim disconnected" id=0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593 namespace=k8s.io
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.573120965Z" level=warning msg="cleaning up after shim disconnected" id=0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593 namespace=k8s.io
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.573128733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.629312572Z" level=info msg="StopContainer for \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\" returns successfully"
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.630153556Z" level=info msg="StopPodSandbox for \"56822ad5c90e2a5d5c4744ba861ef6872727ab2dcfe6fde64544f2096ee94b9a\""
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.630252338Z" level=info msg="Container to stop \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.712137759Z" level=info msg="shim disconnected" id=56822ad5c90e2a5d5c4744ba861ef6872727ab2dcfe6fde64544f2096ee94b9a namespace=k8s.io
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.712217178Z" level=warning msg="cleaning up after shim disconnected" id=56822ad5c90e2a5d5c4744ba861ef6872727ab2dcfe6fde64544f2096ee94b9a namespace=k8s.io
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.712231768Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.861353580Z" level=info msg="TearDown network for sandbox \"56822ad5c90e2a5d5c4744ba861ef6872727ab2dcfe6fde64544f2096ee94b9a\" successfully"
	Apr 16 16:23:12 addons-012036 containerd[649]: time="2024-04-16T16:23:12.861425318Z" level=info msg="StopPodSandbox for \"56822ad5c90e2a5d5c4744ba861ef6872727ab2dcfe6fde64544f2096ee94b9a\" returns successfully"
	
	
	==> coredns [f7179288f854b31cc4cbdd569bfcd28c058e519f2bf3526e9928a17684729742] <==
	[INFO] 10.244.0.21:50027 - 47550 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095186s
	[INFO] 10.244.0.21:50027 - 60337 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000145331s
	[INFO] 10.244.0.21:48395 - 33898 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098138s
	[INFO] 10.244.0.21:50027 - 27296 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104926s
	[INFO] 10.244.0.21:48395 - 49288 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103582s
	[INFO] 10.244.0.21:50027 - 35119 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135435s
	[INFO] 10.244.0.21:48395 - 39620 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057238s
	[INFO] 10.244.0.21:48395 - 25647 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100482s
	[INFO] 10.244.0.21:48395 - 14952 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008583s
	[INFO] 10.244.0.21:48395 - 9796 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046317s
	[INFO] 10.244.0.21:48395 - 62384 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000133254s
	[INFO] 10.244.0.21:55726 - 8178 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000162832s
	[INFO] 10.244.0.21:52877 - 53045 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000140694s
	[INFO] 10.244.0.21:52877 - 32168 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075802s
	[INFO] 10.244.0.21:55726 - 17601 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104234s
	[INFO] 10.244.0.21:52877 - 9948 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078815s
	[INFO] 10.244.0.21:55726 - 20915 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000218449s
	[INFO] 10.244.0.21:52877 - 34362 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077228s
	[INFO] 10.244.0.21:55726 - 11535 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062301s
	[INFO] 10.244.0.21:52877 - 36745 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000122211s
	[INFO] 10.244.0.21:55726 - 30739 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087967s
	[INFO] 10.244.0.21:55726 - 28953 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000184869s
	[INFO] 10.244.0.21:52877 - 29987 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012142s
	[INFO] 10.244.0.21:55726 - 65482 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000248392s
	[INFO] 10.244.0.21:52877 - 6403 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000310284s
	
	
	==> describe nodes <==
	Name:               addons-012036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-012036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=addons-012036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_20_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-012036
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:20:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-012036
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:23:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:22:52 +0000   Tue, 16 Apr 2024 16:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    addons-012036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 708d4aa12b0c448c993837b39a2c42f7
	  System UUID:                708d4aa1-2b0c-448c-9938-37b39a2c42f7
	  Boot ID:                    879a873f-bc9d-45b9-9166-b9cec81a5e41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9rvj4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  gcp-auth                    gcp-auth-7d69788767-6prgz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  headlamp                    headlamp-5b77dbd7c4-z758s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  ingress-nginx               ingress-nginx-controller-65496f9567-88dw2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m1s
	  kube-system                 coredns-76f75df574-gl82p                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m11s
	  kube-system                 etcd-addons-012036                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m23s
	  kube-system                 kube-apiserver-addons-012036                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-addons-012036        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-s6dq9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-addons-012036                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 snapshot-controller-58dbcc7b99-dmcpx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-58dbcc7b99-wr6z2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fgvc7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-knpbf               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m9s   kube-proxy       
	  Normal  Starting                 2m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m24s  kubelet          Node addons-012036 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m23s  kubelet          Node addons-012036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s  kubelet          Node addons-012036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s  kubelet          Node addons-012036 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m12s  node-controller  Node addons-012036 event: Registered Node addons-012036 in Controller
	
	
	==> dmesg <==
	[  +0.653985] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +5.040001] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[  +0.059667] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.075584] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.684265] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[Apr16 16:21] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.156476] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.110160] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.176625] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.021541] kauditd_printk_skb: 136 callbacks suppressed
	[  +7.276636] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.291752] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.080528] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.788211] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.199215] kauditd_printk_skb: 31 callbacks suppressed
	[Apr16 16:22] kauditd_printk_skb: 67 callbacks suppressed
	[ +11.731376] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.500462] kauditd_printk_skb: 10 callbacks suppressed
	[ +10.786935] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.117266] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.009410] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.018005] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.292857] kauditd_printk_skb: 56 callbacks suppressed
	[Apr16 16:23] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.003257] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [085bd521d80e689ee6adf7cb8b640371281a985e7349716003c1f7dc08415dac] <==
	{"level":"info","ts":"2024-04-16T16:21:55.806515Z","caller":"traceutil/trace.go:171","msg":"trace[86216786] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:986; }","duration":"135.544279ms","start":"2024-04-16T16:21:55.670961Z","end":"2024-04-16T16:21:55.806505Z","steps":["trace[86216786] 'agreement among raft nodes before linearized reading'  (duration: 135.163376ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:21:55.80686Z","caller":"traceutil/trace.go:171","msg":"trace[1785917352] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"221.351087ms","start":"2024-04-16T16:21:55.585501Z","end":"2024-04-16T16:21:55.806852Z","steps":["trace[1785917352] 'process raft request'  (duration: 220.371284ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:21:57.074203Z","caller":"traceutil/trace.go:171","msg":"trace[684185531] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"297.001124ms","start":"2024-04-16T16:21:56.777185Z","end":"2024-04-16T16:21:57.074186Z","steps":["trace[684185531] 'process raft request'  (duration: 296.704957ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:21:57.075215Z","caller":"traceutil/trace.go:171","msg":"trace[1690324089] linearizableReadLoop","detail":"{readStateIndex:1028; appliedIndex:1028; }","duration":"208.218349ms","start":"2024-04-16T16:21:56.866799Z","end":"2024-04-16T16:21:57.075018Z","steps":["trace[1690324089] 'read index received'  (duration: 208.211566ms)","trace[1690324089] 'applied index is now lower than readState.Index'  (duration: 5.768µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T16:21:57.076818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.486231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-04-16T16:21:57.077352Z","caller":"traceutil/trace.go:171","msg":"trace[348949161] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1000; }","duration":"167.048907ms","start":"2024-04-16T16:21:56.91026Z","end":"2024-04-16T16:21:57.077309Z","steps":["trace[348949161] 'agreement among raft nodes before linearized reading'  (duration: 165.485172ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:21:57.078561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.701517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-16T16:21:57.078917Z","caller":"traceutil/trace.go:171","msg":"trace[939727194] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:1000; }","duration":"212.120941ms","start":"2024-04-16T16:21:56.866786Z","end":"2024-04-16T16:21:57.078907Z","steps":["trace[939727194] 'agreement among raft nodes before linearized reading'  (duration: 211.681702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:01.644395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.645477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/gcp-auth-webhook-cfg\" ","response":"range_response_count:1 size:2695"}
	{"level":"info","ts":"2024-04-16T16:22:01.644495Z","caller":"traceutil/trace.go:171","msg":"trace[375487526] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/gcp-auth-webhook-cfg; range_end:; response_count:1; response_revision:1042; }","duration":"205.784243ms","start":"2024-04-16T16:22:01.438691Z","end":"2024-04-16T16:22:01.644475Z","steps":["trace[375487526] 'range keys from in-memory index tree'  (duration: 205.320252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:01.645043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.430092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85732"}
	{"level":"info","ts":"2024-04-16T16:22:01.645122Z","caller":"traceutil/trace.go:171","msg":"trace[345045695] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1042; }","duration":"128.536208ms","start":"2024-04-16T16:22:01.516574Z","end":"2024-04-16T16:22:01.645111Z","steps":["trace[345045695] 'range keys from in-memory index tree'  (duration: 128.185653ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:18.205899Z","caller":"traceutil/trace.go:171","msg":"trace[1331830235] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1155; }","duration":"294.845542ms","start":"2024-04-16T16:22:17.911021Z","end":"2024-04-16T16:22:18.205867Z","steps":["trace[1331830235] 'read index received'  (duration: 294.508188ms)","trace[1331830235] 'applied index is now lower than readState.Index'  (duration: 336.55µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T16:22:18.206056Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.45033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.247\" ","response":"range_response_count:1 size:135"}
	{"level":"warn","ts":"2024-04-16T16:22:18.206058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.027616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14397"}
	{"level":"info","ts":"2024-04-16T16:22:18.206096Z","caller":"traceutil/trace.go:171","msg":"trace[1365484486] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1123; }","duration":"295.091169ms","start":"2024-04-16T16:22:17.910995Z","end":"2024-04-16T16:22:18.206087Z","steps":["trace[1365484486] 'agreement among raft nodes before linearized reading'  (duration: 294.975107ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:18.206132Z","caller":"traceutil/trace.go:171","msg":"trace[431793900] range","detail":"{range_begin:/registry/masterleases/192.168.39.247; range_end:; response_count:1; response_revision:1123; }","duration":"282.488416ms","start":"2024-04-16T16:22:17.923581Z","end":"2024-04-16T16:22:18.20607Z","steps":["trace[431793900] 'agreement among raft nodes before linearized reading'  (duration: 282.395538ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:18.206232Z","caller":"traceutil/trace.go:171","msg":"trace[1813769361] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"390.254571ms","start":"2024-04-16T16:22:17.815969Z","end":"2024-04-16T16:22:18.206224Z","steps":["trace[1813769361] 'process raft request'  (duration: 389.602067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:18.206303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:22:17.815941Z","time spent":"390.315425ms","remote":"127.0.0.1:56522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":793,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-ps4j5.17c6cf272aec7b6b\" mod_revision:954 > success:<request_put:<key:\"/registry/events/gadget/gadget-ps4j5.17c6cf272aec7b6b\" value_size:722 lease:5145801175088306029 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-ps4j5.17c6cf272aec7b6b\" > >"}
	{"level":"warn","ts":"2024-04-16T16:22:18.206338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.958838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-16T16:22:18.206358Z","caller":"traceutil/trace.go:171","msg":"trace[43758733] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1123; }","duration":"148.004694ms","start":"2024-04-16T16:22:18.058348Z","end":"2024-04-16T16:22:18.206353Z","steps":["trace[43758733] 'agreement among raft nodes before linearized reading'  (duration: 147.934703ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:23.965517Z","caller":"traceutil/trace.go:171","msg":"trace[1324930744] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"104.133832ms","start":"2024-04-16T16:22:23.861368Z","end":"2024-04-16T16:22:23.965502Z","steps":["trace[1324930744] 'process raft request'  (duration: 103.393479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:38.257919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.363576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/gadget/kube-root-ca.crt\" ","response":"range_response_count:1 size:1740"}
	{"level":"info","ts":"2024-04-16T16:22:38.257965Z","caller":"traceutil/trace.go:171","msg":"trace[1838935135] range","detail":"{range_begin:/registry/configmaps/gadget/kube-root-ca.crt; range_end:; response_count:1; response_revision:1282; }","duration":"257.442522ms","start":"2024-04-16T16:22:38.000512Z","end":"2024-04-16T16:22:38.257955Z","steps":["trace[1838935135] 'range keys from in-memory index tree'  (duration: 257.248177ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:38.260001Z","caller":"traceutil/trace.go:171","msg":"trace[68077669] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"101.801907ms","start":"2024-04-16T16:22:38.156701Z","end":"2024-04-16T16:22:38.258503Z","steps":["trace[68077669] 'process raft request'  (duration: 101.695054ms)"],"step_count":1}
	
	
	==> gcp-auth [64447d010527c0dc5f9323ff44c5d2b5e3dfa7a6f0799c0bc2e216129458b8e5] <==
	2024/04/16 16:22:25 GCP Auth Webhook started!
	2024/04/16 16:22:33 Ready to marshal response ...
	2024/04/16 16:22:33 Ready to write response ...
	2024/04/16 16:22:37 Ready to marshal response ...
	2024/04/16 16:22:37 Ready to write response ...
	2024/04/16 16:22:38 Ready to marshal response ...
	2024/04/16 16:22:38 Ready to write response ...
	2024/04/16 16:22:38 Ready to marshal response ...
	2024/04/16 16:22:38 Ready to write response ...
	2024/04/16 16:22:46 Ready to marshal response ...
	2024/04/16 16:22:46 Ready to write response ...
	2024/04/16 16:22:46 Ready to marshal response ...
	2024/04/16 16:22:46 Ready to write response ...
	2024/04/16 16:22:46 Ready to marshal response ...
	2024/04/16 16:22:46 Ready to write response ...
	2024/04/16 16:22:50 Ready to marshal response ...
	2024/04/16 16:22:50 Ready to write response ...
	2024/04/16 16:22:50 Ready to marshal response ...
	2024/04/16 16:22:50 Ready to write response ...
	2024/04/16 16:22:51 Ready to marshal response ...
	2024/04/16 16:22:51 Ready to write response ...
	2024/04/16 16:23:03 Ready to marshal response ...
	2024/04/16 16:23:03 Ready to write response ...
	2024/04/16 16:23:07 Ready to marshal response ...
	2024/04/16 16:23:07 Ready to write response ...
	
	
	==> kernel <==
	 16:23:13 up 3 min,  0 users,  load average: 2.93, 1.79, 0.73
	Linux addons-012036 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [48a1e53b66a23e7a0573e41068f9c5090d8c75c664d2ab30d4d01cf1368f5624] <==
	I0416 16:21:12.214586       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.55.36"}
	I0416 16:21:12.326920       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0416 16:21:14.390852       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.51.58"}
	I0416 16:21:14.413738       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0416 16:21:14.790329       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.34.184"}
	I0416 16:21:16.753839       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.13.69"}
	E0416 16:21:39.522595       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	W0416 16:21:39.526682       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 16:21:39.527044       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0416 16:21:39.533150       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	E0416 16:21:39.537215       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	E0416 16:21:39.548663       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.221.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.221.162:443: connect: connection refused
	I0416 16:21:39.644154       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0416 16:22:32.825488       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0416 16:22:33.930159       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0416 16:22:40.537424       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0416 16:22:46.032156       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.47.222"}
	I0416 16:22:46.611543       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0416 16:22:50.600220       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0416 16:22:50.869192       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.177.157"}
	I0416 16:23:03.597194       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.196.211"}
	E0416 16:23:05.485289       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	E0416 16:23:06.550837       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0416 16:23:10.269561       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.247:8443->10.244.0.32:34218: read: connection reset by peer
	
	
	==> kube-controller-manager [87ef232e07b969d1694735212110e97ade6960347449a86c2ad23f48f519c049] <==
	I0416 16:22:49.546945       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0416 16:22:50.073396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5446596998" duration="8.764µs"
	I0416 16:22:50.651236       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0416 16:22:51.457965       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:22:51.458043       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0416 16:22:51.479994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.384µs"
	I0416 16:22:55.365527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="102.951µs"
	I0416 16:22:55.418298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="20.750065ms"
	I0416 16:22:55.418743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="138.746µs"
	I0416 16:23:01.487136       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0416 16:23:01.487218       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:23:01.886821       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0416 16:23:01.887251       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:23:03.350396       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0416 16:23:03.393050       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9rvj4"
	I0416 16:23:03.416431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.998545ms"
	I0416 16:23:03.449953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.677936ms"
	I0416 16:23:03.454488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.413µs"
	I0416 16:23:05.200214       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0416 16:23:05.397596       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0416 16:23:07.299220       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:23:07.299285       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0416 16:23:08.606083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="29.041176ms"
	I0416 16:23:08.607487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="1.198213ms"
	I0416 16:23:12.393858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="12.268µs"
	
	
	==> kube-proxy [b656b7633700bf469cfbf1a15cde28b6e1a8cd5e1f762666e40a4eda00022a63] <==
	I0416 16:21:03.527818       1 server_others.go:72] "Using iptables proxy"
	I0416 16:21:03.545534       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	I0416 16:21:03.826107       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:21:03.826154       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:21:03.826167       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:21:04.000980       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:21:04.001190       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:21:04.001202       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:21:04.011675       1 config.go:188] "Starting service config controller"
	I0416 16:21:04.011699       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:21:04.011721       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:21:04.011724       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:21:04.012203       1 config.go:315] "Starting node config controller"
	I0416 16:21:04.012210       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:21:04.113447       1 shared_informer.go:318] Caches are synced for node config
	I0416 16:21:04.113497       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:21:04.113575       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [24af4e069b22ff8e362e59eeacad22818e447bc78b5e86e5ede0b4994edf7fc7] <==
	W0416 16:20:46.142313       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:20:46.142320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:20:46.142506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 16:20:46.142549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 16:20:46.142600       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:20:46.142669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:20:46.975914       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:20:46.975973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:20:46.998830       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:20:46.998860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:20:47.073308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:47.073380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:47.162376       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:47.162453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:47.244816       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:20:47.244852       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:20:47.357677       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:20:47.357979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:20:47.474088       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:20:47.474152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:20:47.481971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:47.482032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:47.489004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:20:47.489072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0416 16:20:50.207893       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.538957    1246 scope.go:117] "RemoveContainer" containerID="b599890fb9c034735fb9f5964f815268eb07ef30ac8729517f00fa72d6109696"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.577951    1246 scope.go:117] "RemoveContainer" containerID="6dcb42bc8b7b8829634f03ba603a3768ca32d9af9abd13a9147ddb3658c72b8f"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.612385    1246 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.639945    1246 scope.go:117] "RemoveContainer" containerID="2a1e553761953c4abc10789770687355ba5d2a4b6d770e53e35ebf1b3aa0bb96"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.659525    1246 scope.go:117] "RemoveContainer" containerID="b6246028475c81eacba55f063c21da9c4c960dd83314f5fd9af2137d2835d32c"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.727837    1246 scope.go:117] "RemoveContainer" containerID="9ed676fde39246300ab97468d5587ac60c654caf1552c5e83d60f7b7cfe1aef7"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.783951    1246 scope.go:117] "RemoveContainer" containerID="704964b5972d3df0f8969e1a7e6b99625e92d3a7f3204a05b89853be082a5271"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.808912    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a4dcb7-fc8d-45d7-912a-052b70ffedea" path="/var/lib/kubelet/pods/60a4dcb7-fc8d-45d7-912a-052b70ffedea/volumes"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.809527    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6942c4bf-39db-43ca-bf0e-52f91546c9da" path="/var/lib/kubelet/pods/6942c4bf-39db-43ca-bf0e-52f91546c9da/volumes"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.810499    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed11f0c4-aade-4f74-ae20-250260b20010" path="/var/lib/kubelet/pods/ed11f0c4-aade-4f74-ae20-250260b20010/volumes"
	Apr 16 16:23:07 addons-012036 kubelet[1246]: I0416 16:23:07.837801    1246 scope.go:117] "RemoveContainer" containerID="93b2144a44fd3d16f144ceb35f7e69404f5c020aed2b91f2a1934de6fefc1859"
	Apr 16 16:23:10 addons-012036 kubelet[1246]: I0416 16:23:10.613175    1246 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-9rvj4" podStartSLOduration=4.219337311 podStartE2EDuration="7.613134203s" podCreationTimestamp="2024-04-16 16:23:03 +0000 UTC" firstStartedPulling="2024-04-16 16:23:04.066505234 +0000 UTC m=+134.493414107" lastFinishedPulling="2024-04-16 16:23:07.460302124 +0000 UTC m=+137.887210999" observedRunningTime="2024-04-16 16:23:08.577324656 +0000 UTC m=+139.004233533" watchObservedRunningTime="2024-04-16 16:23:10.613134203 +0000 UTC m=+141.040043096"
	Apr 16 16:23:11 addons-012036 kubelet[1246]: I0416 16:23:11.915797    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzlv\" (UniqueName: \"kubernetes.io/projected/4ad3331b-c57e-449b-a159-9d2f3a9ecabf-kube-api-access-wkzlv\") pod \"4ad3331b-c57e-449b-a159-9d2f3a9ecabf\" (UID: \"4ad3331b-c57e-449b-a159-9d2f3a9ecabf\") "
	Apr 16 16:23:11 addons-012036 kubelet[1246]: I0416 16:23:11.926474    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad3331b-c57e-449b-a159-9d2f3a9ecabf-kube-api-access-wkzlv" (OuterVolumeSpecName: "kube-api-access-wkzlv") pod "4ad3331b-c57e-449b-a159-9d2f3a9ecabf" (UID: "4ad3331b-c57e-449b-a159-9d2f3a9ecabf"). InnerVolumeSpecName "kube-api-access-wkzlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 16 16:23:12 addons-012036 kubelet[1246]: I0416 16:23:12.016259    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wkzlv\" (UniqueName: \"kubernetes.io/projected/4ad3331b-c57e-449b-a159-9d2f3a9ecabf-kube-api-access-wkzlv\") on node \"addons-012036\" DevicePath \"\""
	Apr 16 16:23:12 addons-012036 kubelet[1246]: I0416 16:23:12.605365    1246 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06a639a22bde5f2e2f2a85badce3dfa03843cf8f8e158fbe635f4ceb195e3c1"
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.026007    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74msz\" (UniqueName: \"kubernetes.io/projected/fa15f4cf-8401-4c01-8f66-8e92e3945327-kube-api-access-74msz\") pod \"fa15f4cf-8401-4c01-8f66-8e92e3945327\" (UID: \"fa15f4cf-8401-4c01-8f66-8e92e3945327\") "
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.035502    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa15f4cf-8401-4c01-8f66-8e92e3945327-kube-api-access-74msz" (OuterVolumeSpecName: "kube-api-access-74msz") pod "fa15f4cf-8401-4c01-8f66-8e92e3945327" (UID: "fa15f4cf-8401-4c01-8f66-8e92e3945327"). InnerVolumeSpecName "kube-api-access-74msz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.128211    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-74msz\" (UniqueName: \"kubernetes.io/projected/fa15f4cf-8401-4c01-8f66-8e92e3945327-kube-api-access-74msz\") on node \"addons-012036\" DevicePath \"\""
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.611563    1246 scope.go:117] "RemoveContainer" containerID="0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593"
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.638731    1246 scope.go:117] "RemoveContainer" containerID="0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593"
	Apr 16 16:23:13 addons-012036 kubelet[1246]: E0416 16:23:13.640033    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\": not found" containerID="0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593"
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.640077    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593"} err="failed to get container status \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\": rpc error: code = NotFound desc = an error occurred when try to find container \"0830eb1f2606b471a7051067cff21fb152f69c77aee9b83b37f9b569f587e593\": not found"
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.808539    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad3331b-c57e-449b-a159-9d2f3a9ecabf" path="/var/lib/kubelet/pods/4ad3331b-c57e-449b-a159-9d2f3a9ecabf/volumes"
	Apr 16 16:23:13 addons-012036 kubelet[1246]: I0416 16:23:13.809034    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa15f4cf-8401-4c01-8f66-8e92e3945327" path="/var/lib/kubelet/pods/fa15f4cf-8401-4c01-8f66-8e92e3945327/volumes"
	
	
	==> storage-provisioner [d754b9971ad2d2f5a7e70ad479abc97438d830807c7537054de9f14cdb834409] <==
	I0416 16:21:14.815389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:21:15.055503       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:21:15.055542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:21:15.326900       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:21:15.370700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c9135d2-e05f-4353-8901-9f73315b8088", APIVersion:"v1", ResourceVersion:"786", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-012036_1b1199e4-ea3b-4fe6-b1e1-976f51e3b165 became leader
	I0416 16:21:15.370988       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-012036_1b1199e4-ea3b-4fe6-b1e1-976f51e3b165!
	I0416 16:21:15.572726       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-012036_1b1199e4-ea3b-4fe6-b1e1-976f51e3b165!
	E0416 16:22:50.489301       1 controller.go:1050] claim "8f41ec9b-ffc7-4a6a-90f0-74da7d87242a" in work queue no longer exists
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012036 -n addons-012036
helpers_test.go:261: (dbg) Run:  kubectl --context addons-012036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-012036 describe pod ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-012036 describe pod ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv: exit status 1 (60.044973ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zpdtd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kscqv" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-012036 describe pod ingress-nginx-admission-create-zpdtd ingress-nginx-admission-patch-kscqv: exit status 1
--- FAIL: TestAddons/parallel/CSI (48.34s)


Test pass (292/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.06
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 4.27
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.14
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-rc.2/json-events 3.96
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 130.18
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 146.67
38 TestAddons/parallel/Registry 17.09
40 TestAddons/parallel/InspektorGadget 12.43
41 TestAddons/parallel/MetricsServer 7.26
42 TestAddons/parallel/HelmTiller 10.49
45 TestAddons/parallel/Headlamp 17.27
46 TestAddons/parallel/CloudSpanner 6.98
47 TestAddons/parallel/LocalPath 55.84
48 TestAddons/parallel/NvidiaDevicePlugin 6.53
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.81
54 TestCertOptions 49.67
55 TestCertExpiration 280.16
57 TestForceSystemdFlag 112.71
58 TestForceSystemdEnv 77.45
60 TestKVMDriverInstallOrUpdate 3.29
64 TestErrorSpam/setup 48.22
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.8
67 TestErrorSpam/pause 1.75
68 TestErrorSpam/unpause 1.93
69 TestErrorSpam/stop 5.06
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 102.77
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 46.27
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.94
81 TestFunctional/serial/CacheCmd/cache/add_local 1.8
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 43.24
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.69
92 TestFunctional/serial/LogsFileCmd 1.69
93 TestFunctional/serial/InvalidService 4.59
95 TestFunctional/parallel/ConfigCmd 0.46
96 TestFunctional/parallel/DashboardCmd 17.03
97 TestFunctional/parallel/DryRun 0.36
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.11
103 TestFunctional/parallel/ServiceCmdConnect 8.66
104 TestFunctional/parallel/AddonsCmd 0.34
105 TestFunctional/parallel/PersistentVolumeClaim 34.67
107 TestFunctional/parallel/SSHCmd 0.52
108 TestFunctional/parallel/CpCmd 1.56
109 TestFunctional/parallel/MySQL 39.95
110 TestFunctional/parallel/FileSync 0.26
111 TestFunctional/parallel/CertSync 1.67
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
119 TestFunctional/parallel/License 0.23
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.27
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
122 TestFunctional/parallel/MountCmd/any-port 9.67
123 TestFunctional/parallel/ProfileCmd/profile_list 0.33
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
128 TestFunctional/parallel/MountCmd/specific-port 2.06
129 TestFunctional/parallel/ServiceCmd/List 0.88
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.96
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
133 TestFunctional/parallel/ServiceCmd/Format 0.33
134 TestFunctional/parallel/ServiceCmd/URL 0.42
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
148 TestFunctional/parallel/ImageCommands/ImageBuild 2.79
149 TestFunctional/parallel/ImageCommands/Setup 0.88
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.46
151 TestFunctional/parallel/Version/short 0.07
152 TestFunctional/parallel/Version/components 0.77
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.68
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.62
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.55
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.92
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.67
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 207.96
166 TestMultiControlPlane/serial/DeployApp 4.89
167 TestMultiControlPlane/serial/PingHostFromPods 1.43
168 TestMultiControlPlane/serial/AddWorkerNode 45.91
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.61
171 TestMultiControlPlane/serial/CopyFile 14.3
172 TestMultiControlPlane/serial/StopSecondaryNode 93.19
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
174 TestMultiControlPlane/serial/RestartSecondaryNode 44.22
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.58
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 433.56
177 TestMultiControlPlane/serial/DeleteSecondaryNode 7.45
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
179 TestMultiControlPlane/serial/StopCluster 276.55
180 TestMultiControlPlane/serial/RestartCluster 161.52
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
182 TestMultiControlPlane/serial/AddSecondaryNode 73.8
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.6
187 TestJSONOutput/start/Command 74.84
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.78
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.72
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.34
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 96.86
219 TestMountStart/serial/StartWithMountFirst 30.38
220 TestMountStart/serial/VerifyMountFirst 0.41
221 TestMountStart/serial/StartWithMountSecond 27.77
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.93
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 1.74
226 TestMountStart/serial/RestartStopped 22.5
227 TestMountStart/serial/VerifyMountPostStop 0.4
230 TestMultiNode/serial/FreshStart2Nodes 106.02
231 TestMultiNode/serial/DeployApp2Nodes 4.16
232 TestMultiNode/serial/PingHostFrom2Pods 0.93
233 TestMultiNode/serial/AddNode 43.83
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.97
237 TestMultiNode/serial/StopNode 2.46
238 TestMultiNode/serial/StartAfterStop 28.74
239 TestMultiNode/serial/RestartKeepsNodes 301.36
240 TestMultiNode/serial/DeleteNode 2.33
241 TestMultiNode/serial/StopMultiNode 184.19
242 TestMultiNode/serial/RestartMultiNode 82.44
243 TestMultiNode/serial/ValidateNameConflict 50.13
248 TestPreload 233.88
250 TestScheduledStopUnix 120.06
254 TestRunningBinaryUpgrade 243.72
256 TestKubernetesUpgrade 234.98
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
268 TestPause/serial/Start 102.95
269 TestNoKubernetes/serial/StartWithK8s 105.91
270 TestPause/serial/SecondStartNoReconfiguration 45.47
271 TestNoKubernetes/serial/StartWithStopK8s 17.78
272 TestNoKubernetes/serial/Start 29.95
273 TestPause/serial/Pause 0.87
274 TestPause/serial/VerifyStatus 0.29
275 TestPause/serial/Unpause 0.78
276 TestPause/serial/PauseAgain 0.99
277 TestPause/serial/DeletePaused 1.44
278 TestPause/serial/VerifyDeletedResources 0.44
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
280 TestNoKubernetes/serial/ProfileList 0.92
281 TestNoKubernetes/serial/Stop 1.57
282 TestNoKubernetes/serial/StartNoArgs 74.77
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
291 TestNetworkPlugins/group/false 3.96
292 TestStoppedBinaryUpgrade/Setup 0.48
293 TestStoppedBinaryUpgrade/Upgrade 156.7
298 TestStartStop/group/old-k8s-version/serial/FirstStart 179.32
299 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
301 TestStartStop/group/no-preload/serial/FirstStart 131.07
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 130.67
305 TestStartStop/group/newest-cni/serial/FirstStart 61.33
306 TestStartStop/group/no-preload/serial/DeployApp 8.34
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
308 TestStartStop/group/no-preload/serial/Stop 92.55
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
310 TestStartStop/group/old-k8s-version/serial/DeployApp 7.48
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.67
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
314 TestStartStop/group/old-k8s-version/serial/Stop 92.72
315 TestStartStop/group/newest-cni/serial/DeployApp 0
316 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
317 TestStartStop/group/newest-cni/serial/Stop 7.35
318 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
319 TestStartStop/group/newest-cni/serial/SecondStart 35.24
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/newest-cni/serial/Pause 2.88
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
325 TestStartStop/group/no-preload/serial/SecondStart 319.8
327 TestStartStop/group/embed-certs/serial/FirstStart 116.74
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 355.66
330 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/old-k8s-version/serial/SecondStart 646.05
332 TestStartStop/group/embed-certs/serial/DeployApp 8.33
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
334 TestStartStop/group/embed-certs/serial/Stop 92.54
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
336 TestStartStop/group/embed-certs/serial/SecondStart 299.19
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 26.01
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
340 TestStartStop/group/no-preload/serial/Pause 3.1
341 TestNetworkPlugins/group/auto/Start 101.78
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.52
346 TestNetworkPlugins/group/kindnet/Start 67.17
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/auto/KubeletFlags 0.23
349 TestNetworkPlugins/group/auto/NetCatPod 10.27
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
351 TestNetworkPlugins/group/kindnet/NetCatPod 9.25
352 TestNetworkPlugins/group/auto/DNS 0.18
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/kindnet/DNS 0.19
356 TestNetworkPlugins/group/kindnet/Localhost 0.17
357 TestNetworkPlugins/group/kindnet/HairPin 0.17
358 TestNetworkPlugins/group/calico/Start 99.11
359 TestNetworkPlugins/group/custom-flannel/Start 109.96
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.16
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
363 TestStartStop/group/embed-certs/serial/Pause 3.35
364 TestNetworkPlugins/group/enable-default-cni/Start 73.92
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.28
367 TestNetworkPlugins/group/calico/NetCatPod 10.28
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.38
370 TestNetworkPlugins/group/calico/DNS 0.22
371 TestNetworkPlugins/group/calico/Localhost 0.19
372 TestNetworkPlugins/group/calico/HairPin 0.18
373 TestNetworkPlugins/group/custom-flannel/DNS 0.22
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
378 TestNetworkPlugins/group/flannel/Start 88.61
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/bridge/Start 122.16
383 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
385 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
386 TestStartStop/group/old-k8s-version/serial/Pause 3.09
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
389 TestNetworkPlugins/group/flannel/NetCatPod 9.24
390 TestNetworkPlugins/group/flannel/DNS 0.19
391 TestNetworkPlugins/group/flannel/Localhost 0.16
392 TestNetworkPlugins/group/flannel/HairPin 0.14
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
394 TestNetworkPlugins/group/bridge/NetCatPod 9.25
395 TestNetworkPlugins/group/bridge/DNS 0.17
396 TestNetworkPlugins/group/bridge/Localhost 0.14
397 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (9.06s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-253269 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-253269 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.056259026s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.06s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-253269
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-253269: exit status 85 (81.583538ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |          |
	|         | -p download-only-253269        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:40
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:40.119688   10965 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:40.119818   10965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:40.119830   10965 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:40.119834   10965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:40.120032   10965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	W0416 16:19:40.120166   10965 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18649-3613/.minikube/config/config.json: open /home/jenkins/minikube-integration/18649-3613/.minikube/config/config.json: no such file or directory
	I0416 16:19:40.120775   10965 out.go:298] Setting JSON to true
	I0416 16:19:40.121738   10965 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":130,"bootTime":1713284250,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:40.121812   10965 start.go:139] virtualization: kvm guest
	I0416 16:19:40.124506   10965 out.go:97] [download-only-253269] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	W0416 16:19:40.124634   10965 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball: no such file or directory
	I0416 16:19:40.124726   10965 notify.go:220] Checking for updates...
	I0416 16:19:40.126237   10965 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:19:40.128102   10965 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:40.129929   10965 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:19:40.131661   10965 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:40.133386   10965 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0416 16:19:40.136717   10965 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0416 16:19:40.137013   10965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:19:40.249741   10965 out.go:97] Using the kvm2 driver based on user configuration
	I0416 16:19:40.249776   10965 start.go:297] selected driver: kvm2
	I0416 16:19:40.249784   10965 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:19:40.250118   10965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:40.250278   10965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3613/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:19:40.266067   10965 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:19:40.266161   10965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:19:40.266666   10965 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0416 16:19:40.266851   10965 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 16:19:40.266925   10965 cni.go:84] Creating CNI manager for ""
	I0416 16:19:40.266943   10965 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0416 16:19:40.266956   10965 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:19:40.267017   10965 start.go:340] cluster config:
	{Name:download-only-253269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-253269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:19:40.267257   10965 iso.go:125] acquiring lock: {Name:mk70afca65b055481b04a6db2c93574dfae6043a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:40.269581   10965 out.go:97] Downloading VM boot image ...
	I0416 16:19:40.269650   10965 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18649-3613/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:19:43.059461   10965 out.go:97] Starting "download-only-253269" primary control-plane node in "download-only-253269" cluster
	I0416 16:19:43.059491   10965 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0416 16:19:43.085812   10965 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0416 16:19:43.085852   10965 cache.go:56] Caching tarball of preloaded images
	I0416 16:19:43.086031   10965 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0416 16:19:43.088238   10965 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0416 16:19:43.088270   10965 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0416 16:19:43.116955   10965 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18649-3613/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-253269 host does not exist
	  To start a cluster, run: "minikube start -p download-only-253269"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-253269
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.3/json-events (4.27s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-310063 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-310063 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.272681499s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (4.27s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-310063
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-310063: exit status 85 (71.680988ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-253269        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-253269        | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | -o=json --download-only        | download-only-310063 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-310063        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:49.545295   11151 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:49.545550   11151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:49.545560   11151 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:49.545565   11151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:49.545772   11151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:19:49.546359   11151 out.go:298] Setting JSON to true
	I0416 16:19:49.547192   11151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":140,"bootTime":1713284250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:49.547262   11151 start.go:139] virtualization: kvm guest
	I0416 16:19:49.549570   11151 out.go:97] [download-only-310063] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:49.551341   11151 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:19:49.549744   11151 notify.go:220] Checking for updates...
	I0416 16:19:49.553141   11151 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:49.554704   11151 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:19:49.556264   11151 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:49.558015   11151 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-310063 host does not exist
	  To start a cluster, run: "minikube start -p download-only-310063"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-310063
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-rc.2/json-events (3.96s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-220331 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-220331 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.962413235s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (3.96s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-220331
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-220331: exit status 85 (74.969134ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-253269           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-253269           | download-only-253269 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | -o=json --download-only           | download-only-310063 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-310063           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-310063           | download-only-310063 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | -o=json --download-only           | download-only-220331 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-220331           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:54
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:54.166029   11315 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:54.166315   11315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:54.166326   11315 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:54.166331   11315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:54.166527   11315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:19:54.167124   11315 out.go:298] Setting JSON to true
	I0416 16:19:54.168012   11315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":144,"bootTime":1713284250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:54.168079   11315 start.go:139] virtualization: kvm guest
	I0416 16:19:54.170464   11315 out.go:97] [download-only-220331] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:54.172438   11315 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:19:54.170706   11315 notify.go:220] Checking for updates...
	I0416 16:19:54.175603   11315 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:54.177258   11315 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:19:54.178796   11315 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:19:54.180304   11315 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-220331 host does not exist
	  To start a cluster, run: "minikube start -p download-only-220331"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-220331
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-437913 --alsologtostderr --binary-mirror http://127.0.0.1:33293 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-437913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-437913
--- PASS: TestBinaryMirror (0.58s)

TestOffline (130.18s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-024465 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-024465 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m9.022402873s)
helpers_test.go:175: Cleaning up "offline-containerd-024465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-024465
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-024465: (1.153304544s)
--- PASS: TestOffline (130.18s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-012036
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-012036: exit status 85 (62.327318ms)

-- stdout --
	* Profile "addons-012036" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-012036"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-012036
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-012036: exit status 85 (63.752012ms)

-- stdout --
	* Profile "addons-012036" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-012036"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (146.67s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-012036 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-012036 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.66905975s)
--- PASS: TestAddons/Setup (146.67s)

TestAddons/parallel/Registry (17.09s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 31.340747ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jcxdc" [b635d906-6cfa-4550-af73-b2a6efeed3a1] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006085454s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vnvqm" [337f4757-d2bc-47a6-a02c-27da4429dc2b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005762724s
addons_test.go:340: (dbg) Run:  kubectl --context addons-012036 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-012036 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-012036 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.085921848s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 ip
2024/04/16 16:22:42 [DEBUG] GET http://192.168.39.247:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.09s)

TestAddons/parallel/InspektorGadget (12.43s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ps4j5" [a9bcbe5d-8838-47a8-9f3a-f5484f15cc4d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005021813s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-012036
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-012036: (6.420697936s)
--- PASS: TestAddons/parallel/InspektorGadget (12.43s)

TestAddons/parallel/MetricsServer (7.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 32.277537ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-rh5ch" [dd9e68e9-89db-492e-b995-43adcef90c7b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007038422s
addons_test.go:415: (dbg) Run:  kubectl --context addons-012036 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-012036 addons disable metrics-server --alsologtostderr -v=1: (1.151242236s)
--- PASS: TestAddons/parallel/MetricsServer (7.26s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.49s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.630707ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-jqj87" [fa15f4cf-8401-4c01-8f66-8e92e3945327] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00626658s
addons_test.go:473: (dbg) Run:  kubectl --context addons-012036 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-012036 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.63443151s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.49s)

                                                
                                    
TestAddons/parallel/Headlamp (17.27s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-012036 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-012036 --alsologtostderr -v=1: (1.257636839s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-z758s" [9b92cf35-82db-402b-81f9-a2b3f3a432a1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-z758s" [9b92cf35-82db-402b-81f9-a2b3f3a432a1] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.006218724s
--- PASS: TestAddons/parallel/Headlamp (17.27s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.98s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-xgq89" [79bd4630-5f82-43a6-9b19-b49474cba687] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007439125s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-012036
--- PASS: TestAddons/parallel/CloudSpanner (6.98s)

                                                
                                    
TestAddons/parallel/LocalPath (55.84s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-012036 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-012036 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1ff65766-e944-41ca-8570-aebaf9ee1adc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1ff65766-e944-41ca-8570-aebaf9ee1adc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1ff65766-e944-41ca-8570-aebaf9ee1adc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005123295s
addons_test.go:891: (dbg) Run:  kubectl --context addons-012036 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 ssh "cat /opt/local-path-provisioner/pvc-8f41ec9b-ffc7-4a6a-90f0-74da7d87242a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-012036 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-012036 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-012036 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-012036 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.84725086s)
--- PASS: TestAddons/parallel/LocalPath (55.84s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.53s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nwsz2" [c725f54f-6971-493f-bfd5-62cf6aec55cd] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005844618s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-012036
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-knpbf" [da84c797-cb39-4573-99c0-1a62b15f939a] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.012704238s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-012036 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-012036 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.81s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-012036
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-012036: (1m32.496156127s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-012036
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-012036
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-012036
--- PASS: TestAddons/StoppedEnableDisable (92.81s)

                                                
                                    
TestCertOptions (49.67s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-469496 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-469496 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (48.156443718s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-469496 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-469496 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-469496 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-469496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-469496
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-469496: (1.016122017s)
--- PASS: TestCertOptions (49.67s)

                                                
                                    
TestCertExpiration (280.16s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-359632 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-359632 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m24.064549889s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-359632 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-359632 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (15.062363532s)
helpers_test.go:175: Cleaning up "cert-expiration-359632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-359632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-359632: (1.030289672s)
--- PASS: TestCertExpiration (280.16s)

                                                
                                    
TestForceSystemdFlag (112.71s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-749398 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-749398 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m51.246494796s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-749398 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-749398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-749398
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-749398: (1.185669639s)
--- PASS: TestForceSystemdFlag (112.71s)

                                                
                                    
TestForceSystemdEnv (77.45s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-649426 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-649426 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m15.510574873s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-649426 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-649426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-649426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-649426: (1.703086549s)
--- PASS: TestForceSystemdEnv (77.45s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.29s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.29s)

                                                
                                    
TestErrorSpam/setup (48.22s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-193008 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-193008 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-193008 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-193008 --driver=kvm2  --container-runtime=containerd: (48.219584697s)
--- PASS: TestErrorSpam/setup (48.22s)

                                                
                                    
TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.8s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.75s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 pause
--- PASS: TestErrorSpam/pause (1.75s)

                                                
                                    
TestErrorSpam/unpause (1.93s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

                                                
                                    
TestErrorSpam/stop (5.06s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 stop: (2.315963727s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 stop: (1.198087456s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-193008 --log_dir /tmp/nospam-193008 stop: (1.546350915s)
--- PASS: TestErrorSpam/stop (5.06s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18649-3613/.minikube/files/etc/test/nested/copy/10952/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (102.77s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-505303 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0416 16:27:26.149789   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.155895   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.166180   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.186575   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.227000   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.307395   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.467907   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:26.788405   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:27.428663   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:28.709180   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:31.270190   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:36.391403   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:27:46.631674   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-505303 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m42.764866013s)
--- PASS: TestFunctional/serial/StartWithProxy (102.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (46.27s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-505303 --alsologtostderr -v=8
E0416 16:28:07.112401   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:28:48.072985   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-505303 --alsologtostderr -v=8: (46.273211846s)
functional_test.go:659: soft start took 46.273866539s for "functional-505303" cluster.
--- PASS: TestFunctional/serial/SoftStart (46.27s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-505303 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.94s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 cache add registry.k8s.io/pause:3.1: (1.308995899s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 cache add registry.k8s.io/pause:3.3: (1.298353049s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 cache add registry.k8s.io/pause:latest: (1.334281011s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.8s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-505303 /tmp/TestFunctionalserialCacheCmdcacheadd_local3193236279/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cache add minikube-local-cache-test:functional-505303
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 cache add minikube-local-cache-test:functional-505303: (1.370519188s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cache delete minikube-local-cache-test:functional-505303
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-505303
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.80s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.194473ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 cache reload: (1.189981516s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 kubectl -- --context functional-505303 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-505303 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (43.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-505303 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-505303 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.242366024s)
functional_test.go:757: restart took 43.242517306s for "functional-505303" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.24s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-505303 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.69s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 logs: (1.690460221s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

TestFunctional/serial/LogsFileCmd (1.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 logs --file /tmp/TestFunctionalserialLogsFileCmd2896026074/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 logs --file /tmp/TestFunctionalserialLogsFileCmd2896026074/001/logs.txt: (1.684681574s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-505303 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-505303
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-505303: exit status 115 (315.643615ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.237:30554 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-505303 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-505303 delete -f testdata/invalidsvc.yaml: (1.049257679s)
--- PASS: TestFunctional/serial/InvalidService (4.59s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 config get cpus: exit status 14 (70.338198ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 config get cpus: exit status 14 (71.04471ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (17.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-505303 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-505303 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 18683: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.03s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-505303 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-505303 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (172.961176ms)

-- stdout --
	* [functional-505303] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0416 16:29:54.513001   18317 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:29:54.513127   18317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:29:54.513145   18317 out.go:304] Setting ErrFile to fd 2...
	I0416 16:29:54.513153   18317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:29:54.514127   18317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:29:54.514774   18317 out.go:298] Setting JSON to false
	I0416 16:29:54.515835   18317 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":745,"bootTime":1713284250,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:29:54.515911   18317 start.go:139] virtualization: kvm guest
	I0416 16:29:54.518533   18317 out.go:177] * [functional-505303] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:29:54.520545   18317 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:29:54.520566   18317 notify.go:220] Checking for updates...
	I0416 16:29:54.522306   18317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:29:54.524352   18317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:29:54.526039   18317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:29:54.527791   18317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:29:54.529493   18317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:29:54.531728   18317 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:29:54.532475   18317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:29:54.532547   18317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:29:54.549126   18317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0416 16:29:54.549635   18317 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:29:54.550325   18317 main.go:141] libmachine: Using API Version  1
	I0416 16:29:54.550363   18317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:29:54.550800   18317 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:29:54.551009   18317 main.go:141] libmachine: (functional-505303) Calling .DriverName
	I0416 16:29:54.551353   18317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:29:54.551806   18317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:29:54.551868   18317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:29:54.568380   18317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36227
	I0416 16:29:54.568840   18317 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:29:54.569412   18317 main.go:141] libmachine: Using API Version  1
	I0416 16:29:54.569438   18317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:29:54.569821   18317 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:29:54.570074   18317 main.go:141] libmachine: (functional-505303) Calling .DriverName
	I0416 16:29:54.610358   18317 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 16:29:54.612006   18317 start.go:297] selected driver: kvm2
	I0416 16:29:54.612033   18317 start.go:901] validating driver "kvm2" against &{Name:functional-505303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-505303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:29:54.612167   18317 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:29:54.614695   18317 out.go:177] 
	W0416 16:29:54.616395   18317 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0416 16:29:54.617884   18317 out.go:177]

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-505303 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.36s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-505303 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-505303 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (184.635326ms)

-- stdout --
	* [functional-505303] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0416 16:29:54.325481   18238 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:29:54.325623   18238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:29:54.325662   18238 out.go:304] Setting ErrFile to fd 2...
	I0416 16:29:54.325678   18238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:29:54.326105   18238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:29:54.326854   18238 out.go:298] Setting JSON to false
	I0416 16:29:54.327967   18238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":744,"bootTime":1713284250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:29:54.328034   18238 start.go:139] virtualization: kvm guest
	I0416 16:29:54.330559   18238 out.go:177] * [functional-505303] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0416 16:29:54.332524   18238 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:29:54.332578   18238 notify.go:220] Checking for updates...
	I0416 16:29:54.335699   18238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:29:54.337228   18238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 16:29:54.338820   18238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 16:29:54.340544   18238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:29:54.342091   18238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:29:54.344151   18238 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:29:54.344885   18238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:29:54.344940   18238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:29:54.377868   18238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0416 16:29:54.378348   18238 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:29:54.378887   18238 main.go:141] libmachine: Using API Version  1
	I0416 16:29:54.378910   18238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:29:54.379374   18238 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:29:54.379590   18238 main.go:141] libmachine: (functional-505303) Calling .DriverName
	I0416 16:29:54.379895   18238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:29:54.380199   18238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:29:54.380254   18238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:29:54.396347   18238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0416 16:29:54.396841   18238 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:29:54.397345   18238 main.go:141] libmachine: Using API Version  1
	I0416 16:29:54.397369   18238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:29:54.397753   18238 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:29:54.397978   18238 main.go:141] libmachine: (functional-505303) Calling .DriverName
	I0416 16:29:54.436150   18238 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0416 16:29:54.437700   18238 start.go:297] selected driver: kvm2
	I0416 16:29:54.437722   18238 start.go:901] validating driver "kvm2" against &{Name:functional-505303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-505303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:29:54.437871   18238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:29:54.440761   18238 out.go:177] 
	W0416 16:29:54.442287   18238 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0416 16:29:54.443788   18238 out.go:177]

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (8.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-505303 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-505303 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-dwjkd" [f9413cf9-f098-4837-9ae0-be8188efc45d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-dwjkd" [f9413cf9-f098-4837-9ae0-be8188efc45d] Running
E0416 16:30:09.993461   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
2024/04/16 16:30:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004501206s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.237:30122
functional_test.go:1671: http://192.168.39.237:30122: success! body:

Hostname: hello-node-connect-55497b8b78-dwjkd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.237:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.237:30122
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)

TestFunctional/parallel/AddonsCmd (0.34s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.34s)

TestFunctional/parallel/PersistentVolumeClaim (34.67s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5fc39e1f-92ce-4e7a-85d7-f5b3b2c83a6f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004996755s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-505303 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-505303 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-505303 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-505303 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-505303 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [26a7b133-f13f-4197-82eb-728082651452] Pending
helpers_test.go:344: "sp-pod" [26a7b133-f13f-4197-82eb-728082651452] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [26a7b133-f13f-4197-82eb-728082651452] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004785034s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-505303 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-505303 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-505303 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fa10fdaa-1537-4c05-bfee-bf23a52e4ff4] Pending
helpers_test.go:344: "sp-pod" [fa10fdaa-1537-4c05-bfee-bf23a52e4ff4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fa10fdaa-1537-4c05-bfee-bf23a52e4ff4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005400551s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-505303 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.67s)

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh -n functional-505303 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cp functional-505303:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4058493130/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh -n functional-505303 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh -n functional-505303 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)

TestFunctional/parallel/MySQL (39.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-505303 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ktjlw" [0aa9196c-3ca8-4fc9-845d-7e2ee64c5007] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ktjlw" [0aa9196c-3ca8-4fc9-845d-7e2ee64c5007] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.005323122s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;": exit status 1 (241.847499ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;": exit status 1 (234.501959ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;": exit status 1 (275.602298ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;": exit status 1 (202.79342ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-505303 exec mysql-859648c796-ktjlw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.95s)

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10952/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /etc/test/nested/copy/10952/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.67s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10952.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /etc/ssl/certs/10952.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10952.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /usr/share/ca-certificates/10952.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/109522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /etc/ssl/certs/109522.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/109522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /usr/share/ca-certificates/109522.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-505303 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh "sudo systemctl is-active docker": exit status 1 (251.14589ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh "sudo systemctl is-active crio": exit status 1 (300.700267ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-505303 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-505303 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-ntptn" [ad7ec98e-119b-4a5b-b31e-8701b3fd484f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-ntptn" [ad7ec98e-119b-4a5b-b31e-8701b3fd484f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.010786088s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/MountCmd/any-port (9.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdany-port2000432670/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713284993166162944" to /tmp/TestFunctionalparallelMountCmdany-port2000432670/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713284993166162944" to /tmp/TestFunctionalparallelMountCmdany-port2000432670/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713284993166162944" to /tmp/TestFunctionalparallelMountCmdany-port2000432670/001/test-1713284993166162944
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.48052ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 16 16:29 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 16 16:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 16 16:29 test-1713284993166162944
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh cat /mount-9p/test-1713284993166162944
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-505303 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d81919e7-fc37-43b0-a36f-6d01c459e8fb] Pending
helpers_test.go:344: "busybox-mount" [d81919e7-fc37-43b0-a36f-6d01c459e8fb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d81919e7-fc37-43b0-a36f-6d01c459e8fb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d81919e7-fc37-43b0-a36f-6d01c459e8fb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004320196s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-505303 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdany-port2000432670/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.67s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "267.515125ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "65.062362ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "271.286635ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "75.645137ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/MountCmd/specific-port (2.06s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdspecific-port242135488/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.244824ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdspecific-port242135488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh "sudo umount -f /mount-9p": exit status 1 (209.161016ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-505303 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdspecific-port242135488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)

TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.96s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 service list -o json
functional_test.go:1490: Took "960.936597ms" to run "out/minikube-linux-amd64 -p functional-505303 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.96s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128959842/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128959842/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128959842/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T" /mount1: exit status 1 (381.295943ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-505303 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128959842/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128959842/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-505303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128959842/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.237:31973
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.237:31973
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-505303 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-505303
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-505303
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-505303 image ls --format short --alsologtostderr:
I0416 16:30:35.251469   20271 out.go:291] Setting OutFile to fd 1 ...
I0416 16:30:35.251581   20271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.251591   20271 out.go:304] Setting ErrFile to fd 2...
I0416 16:30:35.251595   20271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.251778   20271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
I0416 16:30:35.252571   20271 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.252718   20271 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.253419   20271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.253480   20271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.269529   20271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
I0416 16:30:35.270043   20271 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.270713   20271 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.270741   20271 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.271173   20271 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.271374   20271 main.go:141] libmachine: (functional-505303) Calling .GetState
I0416 16:30:35.273390   20271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.273430   20271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.288537   20271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
I0416 16:30:35.289079   20271 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.289650   20271 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.289667   20271 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.289955   20271 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.290170   20271 main.go:141] libmachine: (functional-505303) Calling .DriverName
I0416 16:30:35.290454   20271 ssh_runner.go:195] Run: systemctl --version
I0416 16:30:35.290480   20271 main.go:141] libmachine: (functional-505303) Calling .GetSSHHostname
I0416 16:30:35.293197   20271 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.293762   20271 main.go:141] libmachine: (functional-505303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:70", ip: ""} in network mk-functional-505303: {Iface:virbr1 ExpiryTime:2024-04-16 17:26:40 +0000 UTC Type:0 Mac:52:54:00:5a:91:70 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:functional-505303 Clientid:01:52:54:00:5a:91:70}
I0416 16:30:35.293805   20271 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined IP address 192.168.39.237 and MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.293895   20271 main.go:141] libmachine: (functional-505303) Calling .GetSSHPort
I0416 16:30:35.294090   20271 main.go:141] libmachine: (functional-505303) Calling .GetSSHKeyPath
I0416 16:30:35.294376   20271 main.go:141] libmachine: (functional-505303) Calling .GetSSHUsername
I0416 16:30:35.294704   20271 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/functional-505303/id_rsa Username:docker}
I0416 16:30:35.383662   20271 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:30:35.448614   20271 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.448635   20271 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.448959   20271 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.448985   20271 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:35.448994   20271 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.449002   20271 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.449264   20271 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.449279   20271 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-505303 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-505303  | sha256:d65677 | 991B   |
| gcr.io/google-containers/addon-resizer      | functional-505303  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:c613f1 | 70.5MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-505303 image ls --format table --alsologtostderr:
I0416 16:30:35.773677   20383 out.go:291] Setting OutFile to fd 1 ...
I0416 16:30:35.773920   20383 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.773930   20383 out.go:304] Setting ErrFile to fd 2...
I0416 16:30:35.773935   20383 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.774095   20383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
I0416 16:30:35.774619   20383 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.774719   20383 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.775072   20383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.775129   20383 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.790748   20383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
I0416 16:30:35.791219   20383 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.791794   20383 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.791821   20383 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.792154   20383 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.792346   20383 main.go:141] libmachine: (functional-505303) Calling .GetState
I0416 16:30:35.794156   20383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.794227   20383 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.809849   20383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
I0416 16:30:35.810261   20383 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.810801   20383 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.810827   20383 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.811214   20383 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.811425   20383 main.go:141] libmachine: (functional-505303) Calling .DriverName
I0416 16:30:35.811618   20383 ssh_runner.go:195] Run: systemctl --version
I0416 16:30:35.811644   20383 main.go:141] libmachine: (functional-505303) Calling .GetSSHHostname
I0416 16:30:35.814553   20383 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.814948   20383 main.go:141] libmachine: (functional-505303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:70", ip: ""} in network mk-functional-505303: {Iface:virbr1 ExpiryTime:2024-04-16 17:26:40 +0000 UTC Type:0 Mac:52:54:00:5a:91:70 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:functional-505303 Clientid:01:52:54:00:5a:91:70}
I0416 16:30:35.814974   20383 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined IP address 192.168.39.237 and MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.815063   20383 main.go:141] libmachine: (functional-505303) Calling .GetSSHPort
I0416 16:30:35.815261   20383 main.go:141] libmachine: (functional-505303) Calling .GetSSHKeyPath
I0416 16:30:35.815421   20383 main.go:141] libmachine: (functional-505303) Calling .GetSSHUsername
I0416 16:30:35.815566   20383 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/functional-505303/id_rsa Username:docker}
I0416 16:30:35.898843   20383 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:30:35.952731   20383 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.952745   20383 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.953027   20383 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.953041   20383 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:35.953049   20383 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.953073   20383 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.953075   20383 main.go:141] libmachine: (functional-505303) DBG | Closing plugin on server side
I0416 16:30:35.953278   20383 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.953295   20383 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-505303 image ls --format json --alsologtostderr:
[{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:d6567720d7bfe8cfb7c58da33acb4c38caa0342e2894c5c88f822fcb735b39a1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-505303"],"size":"991"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-505303"],"size":"10823156"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1"],"repoTags":["docker.io/library/nginx:latest"],"size":"70542235"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-505303 image ls --format json --alsologtostderr:
I0416 16:30:35.524409   20327 out.go:291] Setting OutFile to fd 1 ...
I0416 16:30:35.524526   20327 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.524536   20327 out.go:304] Setting ErrFile to fd 2...
I0416 16:30:35.524540   20327 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.524750   20327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
I0416 16:30:35.525359   20327 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.525460   20327 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.525832   20327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.525891   20327 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.541253   20327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42019
I0416 16:30:35.541815   20327 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.542446   20327 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.542476   20327 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.542868   20327 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.543080   20327 main.go:141] libmachine: (functional-505303) Calling .GetState
I0416 16:30:35.545154   20327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.545205   20327 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.560544   20327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
I0416 16:30:35.561019   20327 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.561429   20327 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.561450   20327 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.561787   20327 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.561972   20327 main.go:141] libmachine: (functional-505303) Calling .DriverName
I0416 16:30:35.562165   20327 ssh_runner.go:195] Run: systemctl --version
I0416 16:30:35.562190   20327 main.go:141] libmachine: (functional-505303) Calling .GetSSHHostname
I0416 16:30:35.565194   20327 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.565570   20327 main.go:141] libmachine: (functional-505303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:70", ip: ""} in network mk-functional-505303: {Iface:virbr1 ExpiryTime:2024-04-16 17:26:40 +0000 UTC Type:0 Mac:52:54:00:5a:91:70 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:functional-505303 Clientid:01:52:54:00:5a:91:70}
I0416 16:30:35.565620   20327 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined IP address 192.168.39.237 and MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.565820   20327 main.go:141] libmachine: (functional-505303) Calling .GetSSHPort
I0416 16:30:35.565985   20327 main.go:141] libmachine: (functional-505303) Calling .GetSSHKeyPath
I0416 16:30:35.566144   20327 main.go:141] libmachine: (functional-505303) Calling .GetSSHUsername
I0416 16:30:35.566282   20327 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/functional-505303/id_rsa Username:docker}
I0416 16:30:35.655924   20327 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:30:35.705466   20327 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.705484   20327 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.705751   20327 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.705773   20327 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:35.705780   20327 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.705785   20327 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.706015   20327 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.706034   20327 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:35.706050   20327 main.go:141] libmachine: (functional-505303) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-505303 image ls --format yaml --alsologtostderr:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-505303
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:d6567720d7bfe8cfb7c58da33acb4c38caa0342e2894c5c88f822fcb735b39a1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-505303
size: "991"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
repoTags:
- docker.io/library/nginx:latest
size: "70542235"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-505303 image ls --format yaml --alsologtostderr:
I0416 16:30:35.248081   20270 out.go:291] Setting OutFile to fd 1 ...
I0416 16:30:35.248254   20270 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.248268   20270 out.go:304] Setting ErrFile to fd 2...
I0416 16:30:35.248274   20270 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.248584   20270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
I0416 16:30:35.249402   20270 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.249547   20270 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.250167   20270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.250236   20270 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.265098   20270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
I0416 16:30:35.265652   20270 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.266264   20270 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.266290   20270 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.266640   20270 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.266869   20270 main.go:141] libmachine: (functional-505303) Calling .GetState
I0416 16:30:35.269002   20270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.269047   20270 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.284922   20270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34863
I0416 16:30:35.285394   20270 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.285968   20270 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.285991   20270 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.286317   20270 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.286540   20270 main.go:141] libmachine: (functional-505303) Calling .DriverName
I0416 16:30:35.286720   20270 ssh_runner.go:195] Run: systemctl --version
I0416 16:30:35.286742   20270 main.go:141] libmachine: (functional-505303) Calling .GetSSHHostname
I0416 16:30:35.289613   20270 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.290014   20270 main.go:141] libmachine: (functional-505303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:70", ip: ""} in network mk-functional-505303: {Iface:virbr1 ExpiryTime:2024-04-16 17:26:40 +0000 UTC Type:0 Mac:52:54:00:5a:91:70 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:functional-505303 Clientid:01:52:54:00:5a:91:70}
I0416 16:30:35.290041   20270 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined IP address 192.168.39.237 and MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.290174   20270 main.go:141] libmachine: (functional-505303) Calling .GetSSHPort
I0416 16:30:35.290329   20270 main.go:141] libmachine: (functional-505303) Calling .GetSSHKeyPath
I0416 16:30:35.290494   20270 main.go:141] libmachine: (functional-505303) Calling .GetSSHUsername
I0416 16:30:35.290621   20270 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/functional-505303/id_rsa Username:docker}
I0416 16:30:35.380007   20270 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:30:35.435716   20270 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.435732   20270 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.436009   20270 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.436033   20270 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:35.436042   20270 main.go:141] libmachine: Making call to close driver server
I0416 16:30:35.436045   20270 main.go:141] libmachine: (functional-505303) DBG | Closing plugin on server side
I0416 16:30:35.436049   20270 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:35.436281   20270 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:35.436310   20270 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:35.436341   20270 main.go:141] libmachine: (functional-505303) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
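The YAML listing above comes from querying the container runtime (the stderr log shows `sudo crictl images --output json`) and reshaping each entry into `id`/`repoDigests`/`repoTags`/`size` records. A minimal sketch of that reshaping, assuming a crictl-style JSON payload — the field names follow the embedded sample here, not necessarily the exact CRI schema:

```python
import json

def image_records(crictl_json: str) -> list[dict]:
    """Reshape crictl-style `images --output json` output into the
    per-image records shown in the `image ls --format yaml` listing.
    Sizes are emitted as strings, matching the report above."""
    images = json.loads(crictl_json).get("images", [])
    return [
        {
            "id": img.get("id", ""),
            "repoDigests": img.get("repoDigests", []),
            "repoTags": img.get("repoTags", []),
            "size": str(img.get("size", "")),
        }
        for img in images
    ]

# Sample payload based on one entry from the listing above.
sample = json.dumps({"images": [
    {"id": "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
     "repoTags": ["registry.k8s.io/pause:3.1"],
     "repoDigests": [],
     "size": 315399},
]})

for rec in image_records(sample):
    print(rec["id"], rec["repoTags"], rec["size"])
```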

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-505303 ssh pgrep buildkitd: exit status 1 (224.112024ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image build -t localhost/my-image:functional-505303 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image build -t localhost/my-image:functional-505303 testdata/build --alsologtostderr: (2.336375152s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-505303 image build -t localhost/my-image:functional-505303 testdata/build --alsologtostderr:
I0416 16:30:35.725306   20371 out.go:291] Setting OutFile to fd 1 ...
I0416 16:30:35.725537   20371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.725549   20371 out.go:304] Setting ErrFile to fd 2...
I0416 16:30:35.725556   20371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:30:35.725849   20371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
I0416 16:30:35.726684   20371 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.727307   20371 config.go:182] Loaded profile config "functional-505303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0416 16:30:35.727685   20371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.727741   20371 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.743890   20371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
I0416 16:30:35.744359   20371 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.744931   20371 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.744955   20371 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.745339   20371 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.745564   20371 main.go:141] libmachine: (functional-505303) Calling .GetState
I0416 16:30:35.747549   20371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0416 16:30:35.747598   20371 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:30:35.764191   20371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
I0416 16:30:35.764613   20371 main.go:141] libmachine: () Calling .GetVersion
I0416 16:30:35.765191   20371 main.go:141] libmachine: Using API Version  1
I0416 16:30:35.765218   20371 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:30:35.765627   20371 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:30:35.765838   20371 main.go:141] libmachine: (functional-505303) Calling .DriverName
I0416 16:30:35.766055   20371 ssh_runner.go:195] Run: systemctl --version
I0416 16:30:35.766084   20371 main.go:141] libmachine: (functional-505303) Calling .GetSSHHostname
I0416 16:30:35.769285   20371 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.769705   20371 main.go:141] libmachine: (functional-505303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:70", ip: ""} in network mk-functional-505303: {Iface:virbr1 ExpiryTime:2024-04-16 17:26:40 +0000 UTC Type:0 Mac:52:54:00:5a:91:70 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:functional-505303 Clientid:01:52:54:00:5a:91:70}
I0416 16:30:35.769738   20371 main.go:141] libmachine: (functional-505303) DBG | domain functional-505303 has defined IP address 192.168.39.237 and MAC address 52:54:00:5a:91:70 in network mk-functional-505303
I0416 16:30:35.769918   20371 main.go:141] libmachine: (functional-505303) Calling .GetSSHPort
I0416 16:30:35.770114   20371 main.go:141] libmachine: (functional-505303) Calling .GetSSHKeyPath
I0416 16:30:35.770276   20371 main.go:141] libmachine: (functional-505303) Calling .GetSSHUsername
I0416 16:30:35.770412   20371 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/functional-505303/id_rsa Username:docker}
I0416 16:30:35.856307   20371 build_images.go:161] Building image from path: /tmp/build.4238436476.tar
I0416 16:30:35.856363   20371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0416 16:30:35.870914   20371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4238436476.tar
I0416 16:30:35.876971   20371 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4238436476.tar: stat -c "%s %y" /var/lib/minikube/build/build.4238436476.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4238436476.tar': No such file or directory
I0416 16:30:35.877016   20371 ssh_runner.go:362] scp /tmp/build.4238436476.tar --> /var/lib/minikube/build/build.4238436476.tar (3072 bytes)
I0416 16:30:35.912958   20371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4238436476
I0416 16:30:35.930738   20371 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4238436476 -xf /var/lib/minikube/build/build.4238436476.tar
I0416 16:30:35.951240   20371 containerd.go:394] Building image: /var/lib/minikube/build/build.4238436476
I0416 16:30:35.951325   20371 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4238436476 --local dockerfile=/var/lib/minikube/build/build.4238436476 --output type=image,name=localhost/my-image:functional-505303
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:eb2553596943e42b1d08207bc79ddd987c844ad6b03ac68010f47d89c4451975
#8 exporting manifest sha256:eb2553596943e42b1d08207bc79ddd987c844ad6b03ac68010f47d89c4451975 0.0s done
#8 exporting config sha256:0bbf17c38e85d2417db184423a6be477f4c6b4aab0fc12bd2f79f4bb8be420bb 0.0s done
#8 naming to localhost/my-image:functional-505303 done
#8 DONE 0.3s
I0416 16:30:37.966745   20371 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4238436476 --local dockerfile=/var/lib/minikube/build/build.4238436476 --output type=image,name=localhost/my-image:functional-505303: (2.015387373s)
I0416 16:30:37.966819   20371 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4238436476
I0416 16:30:37.984816   20371 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4238436476.tar
I0416 16:30:37.996974   20371 build_images.go:217] Built localhost/my-image:functional-505303 from /tmp/build.4238436476.tar
I0416 16:30:37.997014   20371 build_images.go:133] succeeded building to: functional-505303
I0416 16:30:37.997020   20371 build_images.go:134] failed building to: 
I0416 16:30:37.997049   20371 main.go:141] libmachine: Making call to close driver server
I0416 16:30:37.997062   20371 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:37.997412   20371 main.go:141] libmachine: (functional-505303) DBG | Closing plugin on server side
I0416 16:30:37.997412   20371 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:37.997473   20371 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:37.997483   20371 main.go:141] libmachine: Making call to close driver server
I0416 16:30:37.997492   20371 main.go:141] libmachine: (functional-505303) Calling .Close
I0416 16:30:37.997759   20371 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:30:37.997787   20371 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:30:37.997826   20371 main.go:141] libmachine: (functional-505303) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)
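The build log above shows the staging sequence minikube uses: pack the context into a tar under `/tmp`, copy it to `/var/lib/minikube/build/`, extract it, then invoke `buildctl` with the `dockerfile.v0` frontend, pointing both `--local context` and `--local dockerfile` at the extracted directory. A sketch that assembles the equivalent invocation as a string (command construction only, using the paths and tag from this run as illustrative values):

```python
import shlex

def buildctl_command(build_dir: str, image_name: str) -> str:
    """Assemble the `buildctl build` invocation seen in the log:
    context and dockerfile both point at the extracted build dir,
    output is a named image in the runtime's store."""
    args = [
        "sudo", "buildctl", "build",
        "--frontend", "dockerfile.v0",
        "--local", f"context={build_dir}",
        "--local", f"dockerfile={build_dir}",
        "--output", f"type=image,name={image_name}",
    ]
    return " ".join(shlex.quote(a) for a in args)

cmd = buildctl_command("/var/lib/minikube/build/build.4238436476",
                       "localhost/my-image:functional-505303")
print(cmd)
```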

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-505303
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image load --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image load --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr: (6.20332736s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.46s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image load --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image load --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr: (3.407946195s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.074613066s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-505303
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image load --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image load --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr: (5.282629214s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image save gcr.io/google-containers/addon-resizer:functional-505303 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image save gcr.io/google-containers/addon-resizer:functional-505303 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.551158098s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image rm gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.619259821s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-505303
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-505303 image save --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-505303 image save --daemon gcr.io/google-containers/addon-resizer:functional-505303 --alsologtostderr: (1.631961201s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-505303
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.67s)
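The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon tests above exercise a save/remove/load roundtrip through the minikube CLI. As a sketch, the sequence can be written down as argv vectors (illustrative only, mirroring the commands in the logs above; nothing is executed here):

```python
def image_roundtrip_cmds(profile: str, image: str, tarball: str) -> list[list[str]]:
    """The save/remove/load cycle exercised by the image tests above,
    as argv vectors for the minikube binary under test."""
    mk = ["out/minikube-linux-amd64", "-p", profile]
    return [
        mk + ["image", "save", image, tarball],  # export to a tar on the host
        mk + ["image", "rm", image],             # drop it from the runtime
        mk + ["image", "load", tarball],         # restore it from the tar
        mk + ["image", "ls"],                    # verify it is back
    ]

for argv in image_roundtrip_cmds(
        "functional-505303",
        "gcr.io/google-containers/addon-resizer:functional-505303",
        "addon-resizer-save.tar"):
    print(" ".join(argv))
```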

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-505303
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-505303
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-505303
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587453 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0416 16:32:26.149758   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:32:53.834282   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-587453 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m27.209684479s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (207.96s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-587453 -- rollout status deployment/busybox: (2.335592415s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-48dj6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-lmrfw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-n7clk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-48dj6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-lmrfw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-n7clk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-48dj6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-lmrfw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-n7clk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.89s)

TestMultiControlPlane/serial/PingHostFromPods (1.43s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-48dj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-48dj6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-lmrfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-lmrfw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-n7clk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587453 -- exec busybox-7fdf7869d9-n7clk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.43s)

TestMultiControlPlane/serial/AddWorkerNode (45.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-587453 -v=7 --alsologtostderr
E0416 16:34:52.967534   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:52.972852   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:52.983207   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:53.003581   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:53.043937   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:53.124320   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:53.284754   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:53.604882   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:54.245917   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:55.526970   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:34:58.087624   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-587453 -v=7 --alsologtostderr: (45.007306562s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.91s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-587453 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E0416 16:35:03.208245   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

TestMultiControlPlane/serial/CopyFile (14.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp testdata/cp-test.txt ha-587453:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3755849957/001/cp-test_ha-587453.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453:/home/docker/cp-test.txt ha-587453-m02:/home/docker/cp-test_ha-587453_ha-587453-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test_ha-587453_ha-587453-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453:/home/docker/cp-test.txt ha-587453-m03:/home/docker/cp-test_ha-587453_ha-587453-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test_ha-587453_ha-587453-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453:/home/docker/cp-test.txt ha-587453-m04:/home/docker/cp-test_ha-587453_ha-587453-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test_ha-587453_ha-587453-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp testdata/cp-test.txt ha-587453-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3755849957/001/cp-test_ha-587453-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m02:/home/docker/cp-test.txt ha-587453:/home/docker/cp-test_ha-587453-m02_ha-587453.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test_ha-587453-m02_ha-587453.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m02:/home/docker/cp-test.txt ha-587453-m03:/home/docker/cp-test_ha-587453-m02_ha-587453-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test_ha-587453-m02_ha-587453-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m02:/home/docker/cp-test.txt ha-587453-m04:/home/docker/cp-test_ha-587453-m02_ha-587453-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test_ha-587453-m02_ha-587453-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp testdata/cp-test.txt ha-587453-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3755849957/001/cp-test_ha-587453-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m03:/home/docker/cp-test.txt ha-587453:/home/docker/cp-test_ha-587453-m03_ha-587453.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test_ha-587453-m03_ha-587453.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m03:/home/docker/cp-test.txt ha-587453-m02:/home/docker/cp-test_ha-587453-m03_ha-587453-m02.txt
E0416 16:35:13.448531   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test_ha-587453-m03_ha-587453-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m03:/home/docker/cp-test.txt ha-587453-m04:/home/docker/cp-test_ha-587453-m03_ha-587453-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test_ha-587453-m03_ha-587453-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp testdata/cp-test.txt ha-587453-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3755849957/001/cp-test_ha-587453-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m04:/home/docker/cp-test.txt ha-587453:/home/docker/cp-test_ha-587453-m04_ha-587453.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453 "sudo cat /home/docker/cp-test_ha-587453-m04_ha-587453.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m04:/home/docker/cp-test.txt ha-587453-m02:/home/docker/cp-test_ha-587453-m04_ha-587453-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m02 "sudo cat /home/docker/cp-test_ha-587453-m04_ha-587453-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 cp ha-587453-m04:/home/docker/cp-test.txt ha-587453-m03:/home/docker/cp-test_ha-587453-m04_ha-587453-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 ssh -n ha-587453-m03 "sudo cat /home/docker/cp-test_ha-587453-m04_ha-587453-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.30s)

TestMultiControlPlane/serial/StopSecondaryNode (93.19s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 node stop m02 -v=7 --alsologtostderr
E0416 16:35:33.929542   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:36:14.889976   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-587453 node stop m02 -v=7 --alsologtostderr: (1m32.495813779s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr: exit status 7 (695.047672ms)

-- stdout --
	ha-587453
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-587453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587453-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-587453-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0416 16:36:50.712739   24943 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:36:50.713082   24943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:36:50.713098   24943 out.go:304] Setting ErrFile to fd 2...
	I0416 16:36:50.713105   24943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:36:50.713438   24943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:36:50.713738   24943 out.go:298] Setting JSON to false
	I0416 16:36:50.713784   24943 mustload.go:65] Loading cluster: ha-587453
	I0416 16:36:50.713907   24943 notify.go:220] Checking for updates...
	I0416 16:36:50.714359   24943 config.go:182] Loaded profile config "ha-587453": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:36:50.714380   24943 status.go:255] checking status of ha-587453 ...
	I0416 16:36:50.714959   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:50.715042   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:50.735384   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0416 16:36:50.735896   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:50.736569   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:50.736591   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:50.736989   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:50.737207   24943 main.go:141] libmachine: (ha-587453) Calling .GetState
	I0416 16:36:50.739126   24943 status.go:330] ha-587453 host status = "Running" (err=<nil>)
	I0416 16:36:50.739164   24943 host.go:66] Checking if "ha-587453" exists ...
	I0416 16:36:50.739589   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:50.739638   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:50.756302   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I0416 16:36:50.756728   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:50.757240   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:50.757275   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:50.757588   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:50.757752   24943 main.go:141] libmachine: (ha-587453) Calling .GetIP
	I0416 16:36:50.760973   24943 main.go:141] libmachine: (ha-587453) DBG | domain ha-587453 has defined MAC address 52:54:00:26:5c:e0 in network mk-ha-587453
	I0416 16:36:50.761486   24943 main.go:141] libmachine: (ha-587453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:5c:e0", ip: ""} in network mk-ha-587453: {Iface:virbr1 ExpiryTime:2024-04-16 17:30:59 +0000 UTC Type:0 Mac:52:54:00:26:5c:e0 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-587453 Clientid:01:52:54:00:26:5c:e0}
	I0416 16:36:50.761517   24943 main.go:141] libmachine: (ha-587453) DBG | domain ha-587453 has defined IP address 192.168.39.194 and MAC address 52:54:00:26:5c:e0 in network mk-ha-587453
	I0416 16:36:50.761656   24943 host.go:66] Checking if "ha-587453" exists ...
	I0416 16:36:50.762117   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:50.762174   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:50.777894   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0416 16:36:50.778256   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:50.778876   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:50.778907   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:50.779231   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:50.779437   24943 main.go:141] libmachine: (ha-587453) Calling .DriverName
	I0416 16:36:50.779618   24943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:36:50.779641   24943 main.go:141] libmachine: (ha-587453) Calling .GetSSHHostname
	I0416 16:36:50.782559   24943 main.go:141] libmachine: (ha-587453) DBG | domain ha-587453 has defined MAC address 52:54:00:26:5c:e0 in network mk-ha-587453
	I0416 16:36:50.783185   24943 main.go:141] libmachine: (ha-587453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:5c:e0", ip: ""} in network mk-ha-587453: {Iface:virbr1 ExpiryTime:2024-04-16 17:30:59 +0000 UTC Type:0 Mac:52:54:00:26:5c:e0 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-587453 Clientid:01:52:54:00:26:5c:e0}
	I0416 16:36:50.783215   24943 main.go:141] libmachine: (ha-587453) DBG | domain ha-587453 has defined IP address 192.168.39.194 and MAC address 52:54:00:26:5c:e0 in network mk-ha-587453
	I0416 16:36:50.783426   24943 main.go:141] libmachine: (ha-587453) Calling .GetSSHPort
	I0416 16:36:50.783620   24943 main.go:141] libmachine: (ha-587453) Calling .GetSSHKeyPath
	I0416 16:36:50.783803   24943 main.go:141] libmachine: (ha-587453) Calling .GetSSHUsername
	I0416 16:36:50.783959   24943 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/ha-587453/id_rsa Username:docker}
	I0416 16:36:50.871231   24943 ssh_runner.go:195] Run: systemctl --version
	I0416 16:36:50.880089   24943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:36:50.898931   24943 kubeconfig.go:125] found "ha-587453" server: "https://192.168.39.254:8443"
	I0416 16:36:50.898967   24943 api_server.go:166] Checking apiserver status ...
	I0416 16:36:50.899004   24943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:36:50.915747   24943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup
	W0416 16:36:50.928677   24943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:36:50.928737   24943 ssh_runner.go:195] Run: ls
	I0416 16:36:50.934642   24943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:36:50.942030   24943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:36:50.942064   24943 status.go:422] ha-587453 apiserver status = Running (err=<nil>)
	I0416 16:36:50.942076   24943 status.go:257] ha-587453 status: &{Name:ha-587453 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:36:50.942098   24943 status.go:255] checking status of ha-587453-m02 ...
	I0416 16:36:50.942513   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:50.942540   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:50.957703   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I0416 16:36:50.958098   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:50.958606   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:50.958630   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:50.958964   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:50.959211   24943 main.go:141] libmachine: (ha-587453-m02) Calling .GetState
	I0416 16:36:50.960926   24943 status.go:330] ha-587453-m02 host status = "Stopped" (err=<nil>)
	I0416 16:36:50.960939   24943 status.go:343] host is not running, skipping remaining checks
	I0416 16:36:50.960945   24943 status.go:257] ha-587453-m02 status: &{Name:ha-587453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:36:50.960960   24943 status.go:255] checking status of ha-587453-m03 ...
	I0416 16:36:50.961223   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:50.961255   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:50.976364   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0416 16:36:50.976743   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:50.977213   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:50.977239   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:50.977527   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:50.977721   24943 main.go:141] libmachine: (ha-587453-m03) Calling .GetState
	I0416 16:36:50.979347   24943 status.go:330] ha-587453-m03 host status = "Running" (err=<nil>)
	I0416 16:36:50.979370   24943 host.go:66] Checking if "ha-587453-m03" exists ...
	I0416 16:36:50.979720   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:50.979760   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:50.995065   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0416 16:36:50.995555   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:50.996016   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:50.996035   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:50.996329   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:50.996498   24943 main.go:141] libmachine: (ha-587453-m03) Calling .GetIP
	I0416 16:36:50.999448   24943 main.go:141] libmachine: (ha-587453-m03) DBG | domain ha-587453-m03 has defined MAC address 52:54:00:7d:79:c0 in network mk-ha-587453
	I0416 16:36:50.999917   24943 main.go:141] libmachine: (ha-587453-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:79:c0", ip: ""} in network mk-ha-587453: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:15 +0000 UTC Type:0 Mac:52:54:00:7d:79:c0 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-587453-m03 Clientid:01:52:54:00:7d:79:c0}
	I0416 16:36:50.999942   24943 main.go:141] libmachine: (ha-587453-m03) DBG | domain ha-587453-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:7d:79:c0 in network mk-ha-587453
	I0416 16:36:51.000065   24943 host.go:66] Checking if "ha-587453-m03" exists ...
	I0416 16:36:51.000368   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:51.000481   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:51.015537   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45293
	I0416 16:36:51.015986   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:51.016460   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:51.016484   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:51.016794   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:51.016984   24943 main.go:141] libmachine: (ha-587453-m03) Calling .DriverName
	I0416 16:36:51.017161   24943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:36:51.017178   24943 main.go:141] libmachine: (ha-587453-m03) Calling .GetSSHHostname
	I0416 16:36:51.020255   24943 main.go:141] libmachine: (ha-587453-m03) DBG | domain ha-587453-m03 has defined MAC address 52:54:00:7d:79:c0 in network mk-ha-587453
	I0416 16:36:51.020770   24943 main.go:141] libmachine: (ha-587453-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:79:c0", ip: ""} in network mk-ha-587453: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:15 +0000 UTC Type:0 Mac:52:54:00:7d:79:c0 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-587453-m03 Clientid:01:52:54:00:7d:79:c0}
	I0416 16:36:51.020793   24943 main.go:141] libmachine: (ha-587453-m03) DBG | domain ha-587453-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:7d:79:c0 in network mk-ha-587453
	I0416 16:36:51.020964   24943 main.go:141] libmachine: (ha-587453-m03) Calling .GetSSHPort
	I0416 16:36:51.021154   24943 main.go:141] libmachine: (ha-587453-m03) Calling .GetSSHKeyPath
	I0416 16:36:51.021322   24943 main.go:141] libmachine: (ha-587453-m03) Calling .GetSSHUsername
	I0416 16:36:51.021459   24943 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/ha-587453-m03/id_rsa Username:docker}
	I0416 16:36:51.113082   24943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:36:51.135402   24943 kubeconfig.go:125] found "ha-587453" server: "https://192.168.39.254:8443"
	I0416 16:36:51.135426   24943 api_server.go:166] Checking apiserver status ...
	I0416 16:36:51.135457   24943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:36:51.152812   24943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	W0416 16:36:51.164580   24943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:36:51.164644   24943 ssh_runner.go:195] Run: ls
	I0416 16:36:51.170602   24943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:36:51.175269   24943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:36:51.175295   24943 status.go:422] ha-587453-m03 apiserver status = Running (err=<nil>)
	I0416 16:36:51.175303   24943 status.go:257] ha-587453-m03 status: &{Name:ha-587453-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:36:51.175319   24943 status.go:255] checking status of ha-587453-m04 ...
	I0416 16:36:51.175606   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:51.175636   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:51.191757   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0416 16:36:51.192165   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:51.192611   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:51.192629   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:51.192985   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:51.193181   24943 main.go:141] libmachine: (ha-587453-m04) Calling .GetState
	I0416 16:36:51.194954   24943 status.go:330] ha-587453-m04 host status = "Running" (err=<nil>)
	I0416 16:36:51.194973   24943 host.go:66] Checking if "ha-587453-m04" exists ...
	I0416 16:36:51.195319   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:51.195342   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:51.210291   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0416 16:36:51.210746   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:51.211296   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:51.211323   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:51.211653   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:51.211867   24943 main.go:141] libmachine: (ha-587453-m04) Calling .GetIP
	I0416 16:36:51.214361   24943 main.go:141] libmachine: (ha-587453-m04) DBG | domain ha-587453-m04 has defined MAC address 52:54:00:8e:63:32 in network mk-ha-587453
	I0416 16:36:51.214772   24943 main.go:141] libmachine: (ha-587453-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:63:32", ip: ""} in network mk-ha-587453: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:34 +0000 UTC Type:0 Mac:52:54:00:8e:63:32 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-587453-m04 Clientid:01:52:54:00:8e:63:32}
	I0416 16:36:51.214801   24943 main.go:141] libmachine: (ha-587453-m04) DBG | domain ha-587453-m04 has defined IP address 192.168.39.69 and MAC address 52:54:00:8e:63:32 in network mk-ha-587453
	I0416 16:36:51.214936   24943 host.go:66] Checking if "ha-587453-m04" exists ...
	I0416 16:36:51.215331   24943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:36:51.215375   24943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:36:51.230133   24943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0416 16:36:51.230589   24943 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:36:51.231043   24943 main.go:141] libmachine: Using API Version  1
	I0416 16:36:51.231064   24943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:36:51.231390   24943 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:36:51.231601   24943 main.go:141] libmachine: (ha-587453-m04) Calling .DriverName
	I0416 16:36:51.231798   24943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:36:51.231829   24943 main.go:141] libmachine: (ha-587453-m04) Calling .GetSSHHostname
	I0416 16:36:51.234758   24943 main.go:141] libmachine: (ha-587453-m04) DBG | domain ha-587453-m04 has defined MAC address 52:54:00:8e:63:32 in network mk-ha-587453
	I0416 16:36:51.235197   24943 main.go:141] libmachine: (ha-587453-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:63:32", ip: ""} in network mk-ha-587453: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:34 +0000 UTC Type:0 Mac:52:54:00:8e:63:32 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-587453-m04 Clientid:01:52:54:00:8e:63:32}
	I0416 16:36:51.235227   24943 main.go:141] libmachine: (ha-587453-m04) DBG | domain ha-587453-m04 has defined IP address 192.168.39.69 and MAC address 52:54:00:8e:63:32 in network mk-ha-587453
	I0416 16:36:51.235394   24943 main.go:141] libmachine: (ha-587453-m04) Calling .GetSSHPort
	I0416 16:36:51.235546   24943 main.go:141] libmachine: (ha-587453-m04) Calling .GetSSHKeyPath
	I0416 16:36:51.235686   24943 main.go:141] libmachine: (ha-587453-m04) Calling .GetSSHUsername
	I0416 16:36:51.235818   24943 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/ha-587453-m04/id_rsa Username:docker}
	I0416 16:36:51.324867   24943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:36:51.346505   24943 status.go:257] ha-587453-m04 status: &{Name:ha-587453-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.19s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.22s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 node start m02 -v=7 --alsologtostderr
E0416 16:37:26.148686   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-587453 node start m02 -v=7 --alsologtostderr: (43.289408376s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.22s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (433.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-587453 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-587453 -v=7 --alsologtostderr
E0416 16:37:36.810705   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:39:52.967205   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:40:20.651677   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-587453 -v=7 --alsologtostderr: (4m38.696411562s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587453 --wait=true -v=7 --alsologtostderr
E0416 16:42:26.149652   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 16:43:49.195190   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-587453 --wait=true -v=7 --alsologtostderr: (2m34.744048256s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-587453
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (433.56s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.45s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 node delete m03 -v=7 --alsologtostderr
E0416 16:44:52.967304   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-587453 node delete m03 -v=7 --alsologtostderr: (6.651908314s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.45s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

TestMultiControlPlane/serial/StopCluster (276.55s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 stop -v=7 --alsologtostderr
E0416 16:47:26.149246   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-587453 stop -v=7 --alsologtostderr: (4m36.42231228s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr: exit status 7 (124.103814ms)

-- stdout --
	ha-587453
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587453-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0416 16:49:34.463339   28802 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:49:34.463614   28802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:49:34.463625   28802 out.go:304] Setting ErrFile to fd 2...
	I0416 16:49:34.463630   28802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:49:34.463803   28802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 16:49:34.464009   28802 out.go:298] Setting JSON to false
	I0416 16:49:34.464040   28802 mustload.go:65] Loading cluster: ha-587453
	I0416 16:49:34.464175   28802 notify.go:220] Checking for updates...
	I0416 16:49:34.464573   28802 config.go:182] Loaded profile config "ha-587453": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 16:49:34.464593   28802 status.go:255] checking status of ha-587453 ...
	I0416 16:49:34.465091   28802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:49:34.465153   28802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:34.489046   28802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0416 16:49:34.489593   28802 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:34.490330   28802 main.go:141] libmachine: Using API Version  1
	I0416 16:49:34.490354   28802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:34.490769   28802 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:34.491074   28802 main.go:141] libmachine: (ha-587453) Calling .GetState
	I0416 16:49:34.492680   28802 status.go:330] ha-587453 host status = "Stopped" (err=<nil>)
	I0416 16:49:34.492693   28802 status.go:343] host is not running, skipping remaining checks
	I0416 16:49:34.492698   28802 status.go:257] ha-587453 status: &{Name:ha-587453 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:49:34.492739   28802 status.go:255] checking status of ha-587453-m02 ...
	I0416 16:49:34.493068   28802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:49:34.493617   28802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:34.509123   28802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0416 16:49:34.509535   28802 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:34.510098   28802 main.go:141] libmachine: Using API Version  1
	I0416 16:49:34.510118   28802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:34.510425   28802 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:34.510630   28802 main.go:141] libmachine: (ha-587453-m02) Calling .GetState
	I0416 16:49:34.512277   28802 status.go:330] ha-587453-m02 host status = "Stopped" (err=<nil>)
	I0416 16:49:34.512290   28802 status.go:343] host is not running, skipping remaining checks
	I0416 16:49:34.512296   28802 status.go:257] ha-587453-m02 status: &{Name:ha-587453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:49:34.512324   28802 status.go:255] checking status of ha-587453-m04 ...
	I0416 16:49:34.512606   28802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 16:49:34.512648   28802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:34.527496   28802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0416 16:49:34.527927   28802 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:34.528329   28802 main.go:141] libmachine: Using API Version  1
	I0416 16:49:34.528351   28802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:34.528682   28802 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:34.528872   28802 main.go:141] libmachine: (ha-587453-m04) Calling .GetState
	I0416 16:49:34.530455   28802 status.go:330] ha-587453-m04 host status = "Stopped" (err=<nil>)
	I0416 16:49:34.530471   28802 status.go:343] host is not running, skipping remaining checks
	I0416 16:49:34.530479   28802 status.go:257] ha-587453-m04 status: &{Name:ha-587453-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.55s)

TestMultiControlPlane/serial/RestartCluster (161.52s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587453 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0416 16:49:52.967551   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 16:51:16.012387   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-587453 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m40.704532552s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (161.52s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

TestMultiControlPlane/serial/AddSecondaryNode (73.8s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-587453 --control-plane -v=7 --alsologtostderr
E0416 16:52:26.149524   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-587453 --control-plane -v=7 --alsologtostderr: (1m12.865233011s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-587453 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.60s)

TestJSONOutput/start/Command (74.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-966347 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-966347 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m14.834552556s)
--- PASS: TestJSONOutput/start/Command (74.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-966347 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-966347 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-966347 --output=json --user=testUser
E0416 16:54:52.968128   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-966347 --output=json --user=testUser: (7.342771378s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-519348 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-519348 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.629207ms)

-- stdout --
	{"specversion":"1.0","id":"cc5e04e9-1cbf-4f09-bde4-62a4bc23ac8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-519348] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42f05322-c6bb-4578-8e5a-ff4cb2b86161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18649"}}
	{"specversion":"1.0","id":"7a55fa36-4834-47a7-9367-30b94b2528c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ba6001c-17cc-4ff6-908d-438d208f3ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig"}}
	{"specversion":"1.0","id":"9ae57e4a-39bb-4fce-bab8-4bd9c27df647","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube"}}
	{"specversion":"1.0","id":"a141c899-96b5-415e-a29a-9cc030bdc07a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"19f2b23d-9b84-4c01-a038-7d51c5206442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5e9b606d-35ed-49f7-b29f-1818c4f1995c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-519348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-519348
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (96.86s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-892780 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-892780 --driver=kvm2  --container-runtime=containerd: (46.686432829s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-895048 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-895048 --driver=kvm2  --container-runtime=containerd: (47.280054614s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-892780
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-895048
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-895048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-895048
helpers_test.go:175: Cleaning up "first-892780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-892780
--- PASS: TestMinikubeProfile (96.86s)

TestMountStart/serial/StartWithMountFirst (30.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-949462 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-949462 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.379536823s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.38s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-949462 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-949462 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (27.77s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-963800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0416 16:57:26.149059   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-963800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.772303263s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.77s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.93s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-949462 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.93s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.74s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-963800
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-963800: (1.738629617s)
--- PASS: TestMountStart/serial/Stop (1.74s)

TestMountStart/serial/RestartStopped (22.5s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-963800
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-963800: (21.496738784s)
--- PASS: TestMountStart/serial/RestartStopped (22.50s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (106.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-895670 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-895670 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m45.563801526s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.02s)

TestMultiNode/serial/DeployApp2Nodes (4.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-895670 -- rollout status deployment/busybox: (2.437552013s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-5qrt7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-dvgld -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-5qrt7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-dvgld -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-5qrt7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-dvgld -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.16s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-5qrt7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-5qrt7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-dvgld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-895670 -- exec busybox-7fdf7869d9-dvgld -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
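The host-IP lookup above is a small shell pipeline: busybox-style `nslookup host.minikube.internal` prints the resolved address on line 5 of its output, `awk 'NR==5'` selects that line, and `cut -d' ' -f3` keeps the third space-separated field. A self-contained sketch using canned lookup output (the canned text and the 192.168.39.1 address are illustrative assumptions, not captured from a live cluster):

```shell
# Canned busybox-style nslookup output; line 5 carries the resolved address.
out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# Same pipeline as the test: pick line 5, keep the 3rd space-delimited field.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"    # 192.168.39.1
```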

TestMultiNode/serial/AddNode (43.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-895670 -v 3 --alsologtostderr
E0416 16:59:52.966972   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 17:00:29.195484   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-895670 -v 3 --alsologtostderr: (43.210049106s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.83s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-895670 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (7.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp testdata/cp-test.txt multinode-895670:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2385491186/001/cp-test_multinode-895670.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670:/home/docker/cp-test.txt multinode-895670-m02:/home/docker/cp-test_multinode-895670_multinode-895670-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m02 "sudo cat /home/docker/cp-test_multinode-895670_multinode-895670-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670:/home/docker/cp-test.txt multinode-895670-m03:/home/docker/cp-test_multinode-895670_multinode-895670-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m03 "sudo cat /home/docker/cp-test_multinode-895670_multinode-895670-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp testdata/cp-test.txt multinode-895670-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2385491186/001/cp-test_multinode-895670-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670-m02:/home/docker/cp-test.txt multinode-895670:/home/docker/cp-test_multinode-895670-m02_multinode-895670.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670 "sudo cat /home/docker/cp-test_multinode-895670-m02_multinode-895670.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670-m02:/home/docker/cp-test.txt multinode-895670-m03:/home/docker/cp-test_multinode-895670-m02_multinode-895670-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m03 "sudo cat /home/docker/cp-test_multinode-895670-m02_multinode-895670-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp testdata/cp-test.txt multinode-895670-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2385491186/001/cp-test_multinode-895670-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670-m03:/home/docker/cp-test.txt multinode-895670:/home/docker/cp-test_multinode-895670-m03_multinode-895670.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670 "sudo cat /home/docker/cp-test_multinode-895670-m03_multinode-895670.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 cp multinode-895670-m03:/home/docker/cp-test.txt multinode-895670-m02:/home/docker/cp-test_multinode-895670-m03_multinode-895670-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 ssh -n multinode-895670-m02 "sudo cat /home/docker/cp-test_multinode-895670-m03_multinode-895670-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.97s)
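Each CopyFile step above follows one round-trip pattern: copy a file with `minikube cp`, then `sudo cat` it at the destination over SSH and compare against the source. A minimal local sketch of that check, with plain `cp` standing in for `minikube cp` and a temp directory standing in for a node (all names here are illustrative):

```shell
# Round-trip check mirroring the CopyFile steps: copy a file, read it back at
# the destination, and compare. Plain cp stands in for `minikube cp` here.
src=$(mktemp)
dst=$(mktemp -d)
echo 'hello from cp-test' > "$src"
cp "$src" "$dst/cp-test.txt"          # local -> "node" copy
copied=$(cat "$dst/cp-test.txt")      # the test's `ssh ... sudo cat` step
echo "$copied"
rm -rf "$src" "$dst"
```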

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-895670 node stop m03: (1.526975973s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-895670 status: exit status 7 (464.167472ms)

-- stdout --
	multinode-895670
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-895670-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-895670-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr: exit status 7 (467.264637ms)

-- stdout --
	multinode-895670
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-895670-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-895670-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0416 17:00:46.673523   36342 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:00:46.673633   36342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:00:46.673645   36342 out.go:304] Setting ErrFile to fd 2...
	I0416 17:00:46.673650   36342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:00:46.673877   36342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 17:00:46.674046   36342 out.go:298] Setting JSON to false
	I0416 17:00:46.674073   36342 mustload.go:65] Loading cluster: multinode-895670
	I0416 17:00:46.674133   36342 notify.go:220] Checking for updates...
	I0416 17:00:46.674581   36342 config.go:182] Loaded profile config "multinode-895670": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 17:00:46.674601   36342 status.go:255] checking status of multinode-895670 ...
	I0416 17:00:46.675083   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:46.675184   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:46.692779   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0416 17:00:46.693276   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:46.693818   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:46.693839   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:46.694177   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:46.694413   36342 main.go:141] libmachine: (multinode-895670) Calling .GetState
	I0416 17:00:46.696324   36342 status.go:330] multinode-895670 host status = "Running" (err=<nil>)
	I0416 17:00:46.696345   36342 host.go:66] Checking if "multinode-895670" exists ...
	I0416 17:00:46.696764   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:46.696813   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:46.712874   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0416 17:00:46.713329   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:46.713804   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:46.713843   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:46.714151   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:46.714324   36342 main.go:141] libmachine: (multinode-895670) Calling .GetIP
	I0416 17:00:46.717237   36342 main.go:141] libmachine: (multinode-895670) DBG | domain multinode-895670 has defined MAC address 52:54:00:c5:bd:35 in network mk-multinode-895670
	I0416 17:00:46.717707   36342 main.go:141] libmachine: (multinode-895670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:bd:35", ip: ""} in network mk-multinode-895670: {Iface:virbr1 ExpiryTime:2024-04-16 17:58:18 +0000 UTC Type:0 Mac:52:54:00:c5:bd:35 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:multinode-895670 Clientid:01:52:54:00:c5:bd:35}
	I0416 17:00:46.717764   36342 main.go:141] libmachine: (multinode-895670) DBG | domain multinode-895670 has defined IP address 192.168.39.189 and MAC address 52:54:00:c5:bd:35 in network mk-multinode-895670
	I0416 17:00:46.717915   36342 host.go:66] Checking if "multinode-895670" exists ...
	I0416 17:00:46.718360   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:46.718422   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:46.736228   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0416 17:00:46.736680   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:46.737197   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:46.737240   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:46.737548   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:46.737762   36342 main.go:141] libmachine: (multinode-895670) Calling .DriverName
	I0416 17:00:46.737945   36342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:00:46.737979   36342 main.go:141] libmachine: (multinode-895670) Calling .GetSSHHostname
	I0416 17:00:46.740495   36342 main.go:141] libmachine: (multinode-895670) DBG | domain multinode-895670 has defined MAC address 52:54:00:c5:bd:35 in network mk-multinode-895670
	I0416 17:00:46.740956   36342 main.go:141] libmachine: (multinode-895670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:bd:35", ip: ""} in network mk-multinode-895670: {Iface:virbr1 ExpiryTime:2024-04-16 17:58:18 +0000 UTC Type:0 Mac:52:54:00:c5:bd:35 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:multinode-895670 Clientid:01:52:54:00:c5:bd:35}
	I0416 17:00:46.740990   36342 main.go:141] libmachine: (multinode-895670) DBG | domain multinode-895670 has defined IP address 192.168.39.189 and MAC address 52:54:00:c5:bd:35 in network mk-multinode-895670
	I0416 17:00:46.741143   36342 main.go:141] libmachine: (multinode-895670) Calling .GetSSHPort
	I0416 17:00:46.741325   36342 main.go:141] libmachine: (multinode-895670) Calling .GetSSHKeyPath
	I0416 17:00:46.741472   36342 main.go:141] libmachine: (multinode-895670) Calling .GetSSHUsername
	I0416 17:00:46.741578   36342 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/multinode-895670/id_rsa Username:docker}
	I0416 17:00:46.831890   36342 ssh_runner.go:195] Run: systemctl --version
	I0416 17:00:46.840160   36342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:00:46.856983   36342 kubeconfig.go:125] found "multinode-895670" server: "https://192.168.39.189:8443"
	I0416 17:00:46.857016   36342 api_server.go:166] Checking apiserver status ...
	I0416 17:00:46.857056   36342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:00:46.873203   36342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0416 17:00:46.884583   36342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:00:46.884638   36342 ssh_runner.go:195] Run: ls
	I0416 17:00:46.890519   36342 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0416 17:00:46.895012   36342 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0416 17:00:46.895038   36342 status.go:422] multinode-895670 apiserver status = Running (err=<nil>)
	I0416 17:00:46.895050   36342 status.go:257] multinode-895670 status: &{Name:multinode-895670 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:00:46.895074   36342 status.go:255] checking status of multinode-895670-m02 ...
	I0416 17:00:46.895412   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:46.895447   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:46.910913   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I0416 17:00:46.911410   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:46.911895   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:46.911916   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:46.912318   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:46.912526   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .GetState
	I0416 17:00:46.914137   36342 status.go:330] multinode-895670-m02 host status = "Running" (err=<nil>)
	I0416 17:00:46.914154   36342 host.go:66] Checking if "multinode-895670-m02" exists ...
	I0416 17:00:46.914541   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:46.914592   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:46.930316   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0416 17:00:46.930769   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:46.931327   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:46.931356   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:46.931688   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:46.931905   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .GetIP
	I0416 17:00:46.934823   36342 main.go:141] libmachine: (multinode-895670-m02) DBG | domain multinode-895670-m02 has defined MAC address 52:54:00:ef:ab:86 in network mk-multinode-895670
	I0416 17:00:46.935291   36342 main.go:141] libmachine: (multinode-895670-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ab:86", ip: ""} in network mk-multinode-895670: {Iface:virbr1 ExpiryTime:2024-04-16 17:59:21 +0000 UTC Type:0 Mac:52:54:00:ef:ab:86 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-895670-m02 Clientid:01:52:54:00:ef:ab:86}
	I0416 17:00:46.935321   36342 main.go:141] libmachine: (multinode-895670-m02) DBG | domain multinode-895670-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ef:ab:86 in network mk-multinode-895670
	I0416 17:00:46.935488   36342 host.go:66] Checking if "multinode-895670-m02" exists ...
	I0416 17:00:46.935780   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:46.935829   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:46.951517   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0416 17:00:46.951927   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:46.952414   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:46.952439   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:46.952743   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:46.952900   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .DriverName
	I0416 17:00:46.953071   36342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:00:46.953094   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .GetSSHHostname
	I0416 17:00:46.955984   36342 main.go:141] libmachine: (multinode-895670-m02) DBG | domain multinode-895670-m02 has defined MAC address 52:54:00:ef:ab:86 in network mk-multinode-895670
	I0416 17:00:46.956436   36342 main.go:141] libmachine: (multinode-895670-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:ab:86", ip: ""} in network mk-multinode-895670: {Iface:virbr1 ExpiryTime:2024-04-16 17:59:21 +0000 UTC Type:0 Mac:52:54:00:ef:ab:86 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-895670-m02 Clientid:01:52:54:00:ef:ab:86}
	I0416 17:00:46.956469   36342 main.go:141] libmachine: (multinode-895670-m02) DBG | domain multinode-895670-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ef:ab:86 in network mk-multinode-895670
	I0416 17:00:46.956651   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .GetSSHPort
	I0416 17:00:46.956821   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .GetSSHKeyPath
	I0416 17:00:46.957007   36342 main.go:141] libmachine: (multinode-895670-m02) Calling .GetSSHUsername
	I0416 17:00:46.957132   36342 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3613/.minikube/machines/multinode-895670-m02/id_rsa Username:docker}
	I0416 17:00:47.044334   36342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:00:47.060282   36342 status.go:257] multinode-895670-m02 status: &{Name:multinode-895670-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:00:47.060331   36342 status.go:255] checking status of multinode-895670-m03 ...
	I0416 17:00:47.060614   36342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:00:47.060646   36342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:00:47.076246   36342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
	I0416 17:00:47.076634   36342 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:00:47.077090   36342 main.go:141] libmachine: Using API Version  1
	I0416 17:00:47.077110   36342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:00:47.077410   36342 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:00:47.077606   36342 main.go:141] libmachine: (multinode-895670-m03) Calling .GetState
	I0416 17:00:47.079101   36342 status.go:330] multinode-895670-m03 host status = "Stopped" (err=<nil>)
	I0416 17:00:47.079119   36342 status.go:343] host is not running, skipping remaining checks
	I0416 17:00:47.079128   36342 status.go:257] multinode-895670-m03 status: &{Name:multinode-895670-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

TestMultiNode/serial/StartAfterStop (28.74s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-895670 node start m03 -v=7 --alsologtostderr: (28.067834166s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.74s)

TestMultiNode/serial/RestartKeepsNodes (301.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-895670
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-895670
E0416 17:02:26.149389   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-895670: (3m5.629995176s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-895670 --wait=true -v=8 --alsologtostderr
E0416 17:04:52.967773   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-895670 --wait=true -v=8 --alsologtostderr: (1m55.610819604s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-895670
--- PASS: TestMultiNode/serial/RestartKeepsNodes (301.36s)

TestMultiNode/serial/DeleteNode (2.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-895670 node delete m03: (1.742757184s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.33s)

TestMultiNode/serial/StopMultiNode (184.19s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 stop
E0416 17:07:26.150518   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 17:07:56.013589   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-895670 stop: (3m3.985410485s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-895670 status: exit status 7 (105.790321ms)
-- stdout --
	multinode-895670
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-895670-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr: exit status 7 (100.963809ms)
-- stdout --
	multinode-895670
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-895670-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0416 17:09:23.658308   38983 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:09:23.658593   38983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:09:23.658604   38983 out.go:304] Setting ErrFile to fd 2...
	I0416 17:09:23.658609   38983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:09:23.658853   38983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 17:09:23.659112   38983 out.go:298] Setting JSON to false
	I0416 17:09:23.659166   38983 mustload.go:65] Loading cluster: multinode-895670
	I0416 17:09:23.659277   38983 notify.go:220] Checking for updates...
	I0416 17:09:23.659686   38983 config.go:182] Loaded profile config "multinode-895670": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0416 17:09:23.659703   38983 status.go:255] checking status of multinode-895670 ...
	I0416 17:09:23.660194   38983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:09:23.660254   38983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:09:23.679627   38983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0416 17:09:23.680101   38983 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:09:23.680825   38983 main.go:141] libmachine: Using API Version  1
	I0416 17:09:23.680854   38983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:09:23.681277   38983 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:09:23.681539   38983 main.go:141] libmachine: (multinode-895670) Calling .GetState
	I0416 17:09:23.683362   38983 status.go:330] multinode-895670 host status = "Stopped" (err=<nil>)
	I0416 17:09:23.683378   38983 status.go:343] host is not running, skipping remaining checks
	I0416 17:09:23.683387   38983 status.go:257] multinode-895670 status: &{Name:multinode-895670 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:09:23.683447   38983 status.go:255] checking status of multinode-895670-m02 ...
	I0416 17:09:23.683741   38983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0416 17:09:23.683784   38983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:09:23.699426   38983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41899
	I0416 17:09:23.699846   38983 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:09:23.700296   38983 main.go:141] libmachine: Using API Version  1
	I0416 17:09:23.700321   38983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:09:23.700631   38983 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:09:23.700829   38983 main.go:141] libmachine: (multinode-895670-m02) Calling .GetState
	I0416 17:09:23.702333   38983 status.go:330] multinode-895670-m02 host status = "Stopped" (err=<nil>)
	I0416 17:09:23.702344   38983 status.go:343] host is not running, skipping remaining checks
	I0416 17:09:23.702350   38983 status.go:257] multinode-895670-m02 status: &{Name:multinode-895670-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.19s)

TestMultiNode/serial/RestartMultiNode (82.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-895670 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0416 17:09:52.967655   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-895670 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m21.868017465s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-895670 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.44s)

TestMultiNode/serial/ValidateNameConflict (50.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-895670
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-895670-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-895670-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (79.300821ms)
-- stdout --
	* [multinode-895670-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-895670-m02' is duplicated with machine name 'multinode-895670-m02' in profile 'multinode-895670'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-895670-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-895670-m03 --driver=kvm2  --container-runtime=containerd: (48.928811088s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-895670
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-895670: exit status 80 (254.252259ms)
-- stdout --
	* Adding node m03 to cluster multinode-895670 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-895670-m03 already exists in multinode-895670-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-895670-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.13s)

TestPreload (233.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0416 17:12:26.150081   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m27.301175227s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009884 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-009884
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-009884: (1m32.462959185s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009884 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0416 17:14:52.967793   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009884 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (52.188908023s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009884 image list
helpers_test.go:175: Cleaning up "test-preload-009884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-009884
--- PASS: TestPreload (233.88s)

TestScheduledStopUnix (120.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-584224 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-584224 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.212510141s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584224 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-584224 -n scheduled-stop-584224
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584224 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584224 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584224 -n scheduled-stop-584224
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-584224
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584224 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0416 17:17:09.195998   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 17:17:26.150277   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-584224
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-584224: exit status 7 (76.375991ms)
-- stdout --
	scheduled-stop-584224
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584224 -n scheduled-stop-584224
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584224 -n scheduled-stop-584224: exit status 7 (76.749507ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-584224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-584224
--- PASS: TestScheduledStopUnix (120.06s)

TestRunningBinaryUpgrade (243.72s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.773544149 start -p running-upgrade-065207 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.773544149 start -p running-upgrade-065207 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m18.120249965s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-065207 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0416 17:19:52.967008   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-065207 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m43.932278125s)
helpers_test.go:175: Cleaning up "running-upgrade-065207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-065207
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-065207: (1.210095311s)
--- PASS: TestRunningBinaryUpgrade (243.72s)

TestKubernetesUpgrade (234.98s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m25.900837016s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-038388
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-038388: (2.380821905s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-038388 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-038388 status --format={{.Host}}: exit status 7 (90.82412ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (49.125440875s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-038388 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (103.360648ms)
-- stdout --
	* [kubernetes-upgrade-038388] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-038388
	    minikube start -p kubernetes-upgrade-038388 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0383882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-038388 --kubernetes-version=v1.30.0-rc.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0416 17:22:26.150647   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-038388 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m35.92268062s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-038388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-038388
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-038388: (1.389895286s)
--- PASS: TestKubernetesUpgrade (234.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-051494 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-051494 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (97.392637ms)
-- stdout --
	* [NoKubernetes-051494] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestPause/serial/Start (102.95s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-320499 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-320499 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m42.950067302s)
--- PASS: TestPause/serial/Start (102.95s)

TestNoKubernetes/serial/StartWithK8s (105.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-051494 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-051494 --driver=kvm2  --container-runtime=containerd: (1m45.611216046s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-051494 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (105.91s)

TestPause/serial/SecondStartNoReconfiguration (45.47s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-320499 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-320499 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (45.4432871s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.47s)

TestNoKubernetes/serial/StartWithStopK8s (17.78s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-051494 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-051494 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (16.479014809s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-051494 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-051494 status -o json: exit status 2 (269.474903ms)

-- stdout --
	{"Name":"NoKubernetes-051494","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
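The status JSON above captures the expected shape of a `--no-kubernetes` profile: the host VM keeps running while the kubelet and apiserver stay stopped. A minimal offline sketch of that check, with the JSON inlined rather than captured from a live `minikube ... status -o json` run:

```shell
# Hypothetical offline check mirroring the status output above. In a real
# run the JSON would come from:
#   out/minikube-linux-amd64 -p NoKubernetes-051494 status -o json
status='{"Name":"NoKubernetes-051494","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

# Host must be Running while Kubernetes components are Stopped.
case "$status" in
  *'"Host":"Running"'*'"Kubelet":"Stopped"'*) echo "host up, kubernetes stopped" ;;
  *)                                          echo "unexpected state" ;;
esac
```

This is also why the command exits with status 2: `minikube status` signals stopped components through its exit code, which the test tolerates as a non-zero exit.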
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-051494
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-051494: (1.031571641s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.78s)

TestNoKubernetes/serial/Start (29.95s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-051494 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-051494 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.950117434s)
--- PASS: TestNoKubernetes/serial/Start (29.95s)

TestPause/serial/Pause (0.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-320499 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-320499 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-320499 --output=json --layout=cluster: exit status 2 (285.851674ms)

-- stdout --
	{"Name":"pause-320499","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-320499","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
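The `--layout=cluster` status above reuses HTTP-style codes: 200 for OK, 405 for Stopped, 418 for Paused. A small lookup sketching that mapping (names taken from the JSON in this log, not from minikube source):

```shell
# Map the HTTP-style status codes seen in the cluster layout above to the
# names this log pairs them with.
status_name() {
  case "$1" in
    200) echo "OK" ;;
    405) echo "Stopped" ;;
    418) echo "Paused" ;;
    *)   echo "Unknown" ;;
  esac
}

status_name 418   # the paused apiserver in the output above
```

The overall exit status 2 follows the same logic as in the NoKubernetes status check: a non-OK component is reported through a non-zero exit code, which the paused-cluster test expects.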
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (0.78s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-320499 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-320499 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (1.44s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-320499 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-320499 --alsologtostderr -v=5: (1.4393314s)
--- PASS: TestPause/serial/DeletePaused (1.44s)

TestPause/serial/VerifyDeletedResources (0.44s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-051494 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-051494 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.935153ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
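The check above leans on the exit-code contract of `systemctl is-active`: 0 for an active unit, non-zero otherwise (the "Process exited with status 3" in stderr is the code for an inactive unit, which ssh then propagates). A local stand-in sketching that contract, with `check_kubelet` a hypothetical substitute for the real `minikube ssh "sudo systemctl is-active --quiet service kubelet"`:

```shell
# check_kubelet emulates an inactive kubelet unit: systemctl is-active
# reports status 3 for an inactive unit, and the test treats any non-zero
# exit as "Kubernetes not running".
check_kubelet() {
  return 3   # emulate: unit inactive
}

if check_kubelet; then
  echo "kubelet active"
else
  echo "kubelet not running"
fi
```

The `--quiet` flag suppresses the textual `active`/`inactive` output, so the exit code is the only signal the test consumes.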

TestNoKubernetes/serial/ProfileList (0.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

TestNoKubernetes/serial/Stop (1.57s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-051494
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-051494: (1.573104572s)
--- PASS: TestNoKubernetes/serial/Stop (1.57s)

TestNoKubernetes/serial/StartNoArgs (74.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-051494 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-051494 --driver=kvm2  --container-runtime=containerd: (1m14.767669328s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (74.77s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-051494 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-051494 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.09079ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/false (3.96s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-886148 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-886148 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (119.966804ms)

-- stdout --
	* [false-886148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0416 17:21:26.366756   47250 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:21:26.366916   47250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:21:26.366926   47250 out.go:304] Setting ErrFile to fd 2...
	I0416 17:21:26.366931   47250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:21:26.367146   47250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3613/.minikube/bin
	I0416 17:21:26.367715   47250 out.go:298] Setting JSON to false
	I0416 17:21:26.368713   47250 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":3836,"bootTime":1713284250,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:21:26.368783   47250 start.go:139] virtualization: kvm guest
	I0416 17:21:26.371250   47250 out.go:177] * [false-886148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:21:26.372656   47250 notify.go:220] Checking for updates...
	I0416 17:21:26.374082   47250 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:21:26.375484   47250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:21:26.376982   47250 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3613/kubeconfig
	I0416 17:21:26.378790   47250 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3613/.minikube
	I0416 17:21:26.380182   47250 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:21:26.381661   47250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:21:26.383544   47250 config.go:182] Loaded profile config "kubernetes-upgrade-038388": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0416 17:21:26.383663   47250 config.go:182] Loaded profile config "running-upgrade-065207": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0416 17:21:26.383863   47250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:21:26.422926   47250 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:21:26.424136   47250 start.go:297] selected driver: kvm2
	I0416 17:21:26.424153   47250 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:21:26.424168   47250 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:21:26.426256   47250 out.go:177] 
	W0416 17:21:26.427751   47250 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0416 17:21:26.429226   47250 out.go:177] 

** /stderr **
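Exit status 14 above is minikube's MK_USAGE error: with the containerd runtime, `--cni=false` is rejected before any VM is created because containerd needs a CNI plugin. A sketch of that validation, purely illustrative and not minikube's actual flag-handling code:

```shell
# Illustrative re-creation of the MK_USAGE validation seen above: the
# containerd runtime cannot run with CNI disabled, so the start is
# rejected up front. Variable names here are assumptions, not minikube's.
runtime="containerd"
cni="false"

if [ "$runtime" = "containerd" ] && [ "$cni" = "false" ]; then
  echo 'X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI'
  mk_exit=14   # a real CLI would exit here with status 14
fi
```

The test expects exactly this rejection; the debug-log collection that follows therefore runs against a profile that was never created, which explains every "context was not found" and "Profile not found" line below.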
net_test.go:88: 
----------------------- debugLogs start: false-886148 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-886148" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 16 Apr 2024 17:20:55 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: cluster_info
server: https://192.168.72.60:8443
name: running-upgrade-065207
contexts:
- context:
cluster: running-upgrade-065207
user: running-upgrade-065207
name: running-upgrade-065207
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-065207
user:
client-certificate: /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/running-upgrade-065207/client.crt
client-key: /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/running-upgrade-065207/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-886148

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

>>> host: /etc/crio:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

>>> host: crio config:
* Profile "false-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-886148"

----------------------- debugLogs end: false-886148 [took: 3.662989526s] --------------------------------
helpers_test.go:175: Cleaning up "false-886148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-886148
--- PASS: TestNetworkPlugins/group/false (3.96s)

TestStoppedBinaryUpgrade/Setup (0.48s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

TestStoppedBinaryUpgrade/Upgrade (156.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.218188231 start -p stopped-upgrade-698803 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.218188231 start -p stopped-upgrade-698803 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m14.615285268s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.218188231 -p stopped-upgrade-698803 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.218188231 -p stopped-upgrade-698803 stop: (2.274488792s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-698803 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-698803 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m19.80835523s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.70s)

TestStartStop/group/old-k8s-version/serial/FirstStart (179.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-775840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-775840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m59.322734911s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (179.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-698803
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-698803: (1.19975417s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestStartStop/group/no-preload/serial/FirstStart (131.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-136562 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-136562 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (2m11.070225025s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (131.07s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (130.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-296224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0416 17:24:36.014199   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
E0416 17:24:52.967609   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-296224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (2m10.671818146s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (130.67s)

TestStartStop/group/newest-cni/serial/FirstStart (61.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-207341 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-207341 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (1m1.325523279s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.33s)

TestStartStop/group/no-preload/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-136562 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9110ec9a-1750-44b9-b571-81a2b0bed36a] Pending
helpers_test.go:344: "busybox" [9110ec9a-1750-44b9-b571-81a2b0bed36a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9110ec9a-1750-44b9-b571-81a2b0bed36a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00478852s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-136562 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-136562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-136562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.175056424s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-136562 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/no-preload/serial/Stop (92.55s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-136562 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-136562 --alsologtostderr -v=3: (1m32.546699803s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.55s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-296224 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c351a3f-bf19-4487-8023-2cd9c03d6089] Pending
helpers_test.go:344: "busybox" [4c351a3f-bf19-4487-8023-2cd9c03d6089] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c351a3f-bf19-4487-8023-2cd9c03d6089] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004846869s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-296224 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-775840 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [213f98fb-6145-4e00-ad6f-2114bf26b356] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [213f98fb-6145-4e00-ad6f-2114bf26b356] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.005920897s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-775840 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-296224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-296224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.01951357s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-296224 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-296224 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-296224 --alsologtostderr -v=3: (1m32.670900909s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-775840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-775840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.122183652s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-775840 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/old-k8s-version/serial/Stop (92.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-775840 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-775840 --alsologtostderr -v=3: (1m32.724805385s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.72s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-207341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-207341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013014391s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/newest-cni/serial/Stop (7.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-207341 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-207341 --alsologtostderr -v=3: (7.346851118s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-207341 -n newest-cni-207341
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-207341 -n newest-cni-207341: exit status 7 (75.096026ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-207341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (35.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-207341 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0416 17:27:26.149289   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-207341 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (34.962067244s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-207341 -n newest-cni-207341
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-207341 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-207341 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207341 -n newest-cni-207341
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207341 -n newest-cni-207341: exit status 2 (286.635466ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-207341 -n newest-cni-207341
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-207341 -n newest-cni-207341: exit status 2 (268.064045ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-207341 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207341 -n newest-cni-207341
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-207341 -n newest-cni-207341
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-136562 -n no-preload-136562
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-136562 -n no-preload-136562: exit status 7 (85.269481ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-136562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (319.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-136562 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-136562 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (5m19.45453541s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-136562 -n no-preload-136562
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (319.80s)

TestStartStop/group/embed-certs/serial/FirstStart (116.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-207094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-207094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m56.740350339s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (116.74s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224: exit status 7 (93.63942ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-296224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-296224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-296224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m55.294197055s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-775840 -n old-k8s-version-775840
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-775840 -n old-k8s-version-775840: exit status 7 (85.620319ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-775840 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (646.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-775840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0416 17:29:52.967277   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-775840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (10m45.729711094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-775840 -n old-k8s-version-775840
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (646.05s)

TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-207094 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c7125828-7b16-4ba3-be05-7a0e130394ad] Pending
helpers_test.go:344: "busybox" [c7125828-7b16-4ba3-be05-7a0e130394ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c7125828-7b16-4ba3-be05-7a0e130394ad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005074847s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-207094 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-207094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-207094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044824923s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-207094 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (92.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-207094 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-207094 --alsologtostderr -v=3: (1m32.540817112s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.54s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-207094 -n embed-certs-207094
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-207094 -n embed-certs-207094: exit status 7 (85.702901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-207094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (299.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-207094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0416 17:32:26.149045   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-207094 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m58.878154335s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-207094 -n embed-certs-207094
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (26.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nfw7b" [7bf8ca23-d5c8-4de2-a2ad-8b9d955bbd0c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nfw7b" [7bf8ca23-d5c8-4de2-a2ad-8b9d955bbd0c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.005179462s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (26.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nfw7b" [7bf8ca23-d5c8-4de2-a2ad-8b9d955bbd0c] Running
E0416 17:33:49.196641   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005085952s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-136562 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-136562 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-136562 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-136562 -n no-preload-136562
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-136562 -n no-preload-136562: exit status 2 (289.690868ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-136562 -n no-preload-136562
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-136562 -n no-preload-136562: exit status 2 (272.03166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-136562 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-136562 -n no-preload-136562
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-136562 -n no-preload-136562
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

TestNetworkPlugins/group/auto/Start (101.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m41.778874399s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.78s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pt5dm" [4a20f8c0-b1e1-411f-8ab6-bab66ca97213] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pt5dm" [4a20f8c0-b1e1-411f-8ab6-bab66ca97213] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.008335527s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pt5dm" [4a20f8c0-b1e1-411f-8ab6-bab66ca97213] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007520166s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-296224 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-296224 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-296224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-296224 --alsologtostderr -v=1: (1.086865114s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224: exit status 2 (304.224395ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224: exit status 2 (292.19493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-296224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-296224 -n default-k8s-diff-port-296224
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.52s)

TestNetworkPlugins/group/kindnet/Start (67.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0416 17:34:52.967571   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m7.172018116s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qsk54" [87dcb28d-73b7-49d7-ab5c-404067bf9c70] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005318625s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-txczh" [bb7f7756-2b9d-4f6e-8319-c6448a087f75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-txczh" [bb7f7756-2b9d-4f6e-8319-c6448a087f75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005649443s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p9kqw" [2e9046b4-518f-49f5-be5e-c7d2a8f4647f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p9kqw" [2e9046b4-518f-49f5-be5e-c7d2a8f4647f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004842546s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (99.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m39.113790093s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.11s)

TestNetworkPlugins/group/custom-flannel/Start (109.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0416 17:36:18.714162   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:18.719513   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:18.729850   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:18.750211   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:18.790522   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:18.871240   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:19.031782   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:19.352343   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:19.992652   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:21.272882   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:23.833933   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:28.943510   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:28.948850   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:28.955129   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:36:28.959338   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:28.979799   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:29.020307   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:29.101383   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:29.261739   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:29.582322   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:30.223357   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:31.504312   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:34.065407   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:39.186051   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:36:39.196263   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m49.957868308s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (109.96s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5n26m" [b978c0e6-dea1-4a6b-bfe0-378ea558445a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005761843s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.16s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5n26m" [b978c0e6-dea1-4a6b-bfe0-378ea558445a] Running
E0416 17:36:49.426845   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009607791s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-207094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.16s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-207094 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (3.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-207094 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-207094 --alsologtostderr -v=1: (1.041036397s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-207094 -n embed-certs-207094
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-207094 -n embed-certs-207094: exit status 2 (315.026099ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-207094 -n embed-certs-207094
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-207094 -n embed-certs-207094: exit status 2 (290.506264ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-207094 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-207094 -n embed-certs-207094
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-207094 -n embed-certs-207094
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)

TestNetworkPlugins/group/enable-default-cni/Start (73.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0416 17:36:59.676783   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
E0416 17:37:09.907601   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
E0416 17:37:26.149101   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/addons-012036/client.crt: no such file or directory
E0416 17:37:40.637871   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/no-preload-136562/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m13.91875785s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.92s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4np9n" [6c7218bd-fbdd-4e89-b2c1-f5b09fc63e5b] Running
E0416 17:37:50.868397   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007646909s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkb9l" [450cd551-7763-49f5-a3e2-970711ffe512] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkb9l" [450cd551-7763-49f5-a3e2-970711ffe512] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00628226s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-twcz6" [aff2bf9d-b0ca-4c08-a618-b8d94211a458] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-twcz6" [aff2bf9d-b0ca-4c08-a618-b8d94211a458] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.007132569s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.38s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d5pkz" [785e4052-3045-4023-8e1c-2ce4bdc213e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d5pkz" [785e4052-3045-4023-8e1c-2ce4bdc213e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004804319s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/flannel/Start (88.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m28.609620833s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.61s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (122.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-886148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m2.156243568s)
--- PASS: TestNetworkPlugins/group/bridge/Start (122.16s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-644vs" [ebf4fb6d-9e1c-4f1d-8e89-26a792a3cd77] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.013644823s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-644vs" [ebf4fb6d-9e1c-4f1d-8e89-26a792a3cd77] Running
E0416 17:39:12.789446   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/default-k8s-diff-port-296224/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006085775s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-775840 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-775840 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-775840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-775840 -n old-k8s-version-775840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-775840 -n old-k8s-version-775840: exit status 2 (276.337833ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-775840 -n old-k8s-version-775840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-775840 -n old-k8s-version-775840: exit status 2 (273.242012ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-775840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-775840 -n old-k8s-version-775840
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-775840 -n old-k8s-version-775840
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zmbhm" [0d19f87b-57bd-438b-b9f8-d5ef984831e8] Running
E0416 17:39:52.968005   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/functional-505303/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005367479s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j7zrs" [569be2b5-d721-4379-a9af-94a6c0759271] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-j7zrs" [569be2b5-d721-4379-a9af-94a6c0759271] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004407845s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-886148 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-886148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jfz7v" [da808fee-4198-4568-8607-d82a51947625] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jfz7v" [da808fee-4198-4568-8607-d82a51947625] Running
E0416 17:40:36.678295   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:36.683607   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:36.693872   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:36.714234   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:36.754581   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:36.835120   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:36.995537   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:37.316174   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:37.957006   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:38.803149   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:38.808429   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:38.818765   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:38.839064   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:38.879377   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:38.960506   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:39.120990   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:39.237521   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/kindnet-886148/client.crt: no such file or directory
E0416 17:40:39.441904   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
E0416 17:40:40.082716   10952 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/auto-886148/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004355482s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-886148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)
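The DNS subtest above runs `nslookup kubernetes.default` inside the netcat pod. Outside the cluster, the same "does this name resolve" check can be approximated with the standard-library resolver; this is a rough stand-in sketch, not the test's actual implementation, and the hostnames in the comments are placeholders:

```python
import socket

def resolves(hostname: str) -> bool:
    """Approximate `nslookup <name>`: True iff the system resolver
    returns at least one address record for the name."""
    try:
        return len(socket.getaddrinfo(hostname, None)) > 0
    except socket.gaierror:
        return False

# In-cluster, the test resolves the short service name "kubernetes.default"
# via the cluster DNS (CoreDNS); this sketch only exercises the host resolver.
```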

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-886148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
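The Localhost and HairPin subtests both reduce to a timed TCP connect from inside the pod (`nc -w 5 -i 5 -z <host> 8080`): Localhost dials 127.0.0.1, HairPin dials the pod's own service name. As a rough out-of-cluster stand-in (not the suite's real code), the same probe can be sketched with a plain socket connect:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Rough equivalent of `nc -w 5 -z host port`: succeed iff a TCP
    connection can be established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# HairPin passing means a pod can reach itself through its own Service VIP,
# i.e. the CNI supports hairpin NAT; Localhost just checks loopback.
```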
Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.2/binaries 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
264 TestStartStop/group/disable-driver-mounts 0.19
285 TestNetworkPlugins/group/kubenet 3.5
296 TestNetworkPlugins/group/cilium 4.23

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-113571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-113571
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (3.5s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-886148 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-886148
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-886148
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: /etc/hosts:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: /etc/resolv.conf:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-886148
>>> host: crictl pods:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: crictl containers:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> k8s: describe netcat deployment:
error: context "kubenet-886148" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-886148" does not exist
>>> k8s: netcat logs:
error: context "kubenet-886148" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-886148" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-886148" does not exist
>>> k8s: coredns logs:
error: context "kubenet-886148" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-886148" does not exist
>>> k8s: api server logs:
error: context "kubenet-886148" does not exist
>>> host: /etc/cni:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: ip a s:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: ip r s:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: iptables-save:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: iptables table nat:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-886148" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-886148" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-886148" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: kubelet daemon config:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> k8s: kubelet logs:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 16 Apr 2024 17:20:55 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: cluster_info
server: https://192.168.72.60:8443
name: running-upgrade-065207
contexts:
- context:
cluster: running-upgrade-065207
user: running-upgrade-065207
name: running-upgrade-065207
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-065207
user:
client-certificate: /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/running-upgrade-065207/client.crt
client-key: /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/running-upgrade-065207/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-886148

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-886148"

                                                
                                                
----------------------- debugLogs end: kubenet-886148 [took: 3.344993956s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-886148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-886148
--- SKIP: TestNetworkPlugins/group/kubenet (3.50s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-886148 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-886148

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-886148

>>> host: /etc/nsswitch.conf:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/hosts:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/resolv.conf:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-886148

>>> host: crictl pods:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: crictl containers:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> k8s: describe netcat deployment:
error: context "cilium-886148" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-886148" does not exist

>>> k8s: netcat logs:
error: context "cilium-886148" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-886148" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-886148" does not exist

>>> k8s: coredns logs:
error: context "cilium-886148" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-886148" does not exist

>>> k8s: api server logs:
error: context "cilium-886148" does not exist

>>> host: /etc/cni:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: ip a s:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: ip r s:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: iptables-save:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: iptables table nat:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-886148

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-886148

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-886148" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-886148" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-886148

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-886148

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-886148" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-886148" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-886148" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-886148" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-886148" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: kubelet daemon config:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> k8s: kubelet logs:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18649-3613/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:21:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.72.60:8443
  name: running-upgrade-065207
contexts:
- context:
    cluster: running-upgrade-065207
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:21:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: running-upgrade-065207
  name: running-upgrade-065207
current-context: running-upgrade-065207
kind: Config
preferences: {}
users:
- name: running-upgrade-065207
  user:
    client-certificate: /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/running-upgrade-065207/client.crt
    client-key: /home/jenkins/minikube-integration/18649-3613/.minikube/profiles/running-upgrade-065207/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-886148

>>> host: docker daemon status:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: docker daemon config:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: docker system info:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: cri-docker daemon status:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: cri-docker daemon config:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: cri-dockerd version:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: containerd daemon status:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: containerd daemon config:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: containerd config dump:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: crio daemon status:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: crio daemon config:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: /etc/crio:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

>>> host: crio config:
* Profile "cilium-886148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-886148"

----------------------- debugLogs end: cilium-886148 [took: 4.045774898s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-886148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-886148
--- SKIP: TestNetworkPlugins/group/cilium (4.23s)

                                                
                                    